2026-03-10T14:31:24.190 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T14:31:24.194 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T14:31:24.217 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068
branch: squid
description: orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 1-start 2-services/nfs-keepalive-only 3-final}
email: null
first_in_suite: false
flavor: default
job_id: '1068'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_DAEMON_PLACE_FAIL
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
- - host.b
  - client.1
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKOIpdfW3zG5g/V66Su2ivvtLicboInaUNU8VLVLOryrYc4E7eXsecVeedr4YMfn8I0A2wVLOV3g8veA6UcUMoU=
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIKp6hTwcnGD8ukeIQNr75MELWv+2Zv62Bvqn/P46WBdcLSwrk3BOszcJJAnPikiTavDgHtcpBaHk6v+wfts5cI=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- cephadm:
    roleless: true
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- vip: null
- cephadm.shell:
    host.a:
    - ceph orch device ls --refresh
- vip.exec:
    all-hosts:
    - systemctl stop nfs-server
- cephadm.shell:
    host.a:
    - ceph fs volume create foofs
- cephadm.apply:
    specs:
    - placement:
        count: 1
      service_id: foo
      service_type: nfs
      spec:
        port: 2049
        virtual_ip: '{{VIP0}}'
    - placement:
        count: 1
      service_id: nfs.foo
      service_type: ingress
      spec:
        backend_service: nfs.foo
        keepalive_only: true
        monitor_port: 9002
        virtual_ip: '{{VIP0}}/{{VIPPREFIXLEN}}'
- cephadm.wait_for_service:
    service: nfs.foo
- cephadm.wait_for_service:
    service: ingress.nfs.foo
- cephadm.shell:
    host.a:
    - ceph nfs export create cephfs --fsname foofs --cluster-id foo --pseudo-path /fake
- vip.exec:
    host.a:
    - mkdir /mnt/foo
    - sleep 5
    - mount -t nfs {{VIP0}}:/fake /mnt/foo
    - echo test > /mnt/foo/testfile
    - sync
- cephadm.shell:
    host.a:
    - stat -c '%u %g' /var/log/ceph | grep '167 167'
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
    - ceph orch ls | grep '^osd.all-available-devices '
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T14:31:24.217 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T14:31:24.217 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T14:31:24.217 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T14:31:24.218 INFO:teuthology.task.internal:Checking packages...
2026-03-10T14:31:24.218 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T14:31:24.218 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T14:31:24.218 INFO:teuthology.packaging:ref: None
2026-03-10T14:31:24.218 INFO:teuthology.packaging:tag: None
2026-03-10T14:31:24.218 INFO:teuthology.packaging:branch: squid
2026-03-10T14:31:24.218 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T14:31:24.218 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T14:31:24.987 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T14:31:24.988 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T14:31:24.992 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T14:31:24.992 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T14:31:24.993 INFO:teuthology.task.internal:Saving configuration
2026-03-10T14:31:24.997 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T14:31:24.998 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T14:31:25.005 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 14:30:10.367947', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKOIpdfW3zG5g/V66Su2ivvtLicboInaUNU8VLVLOryrYc4E7eXsecVeedr4YMfn8I0A2wVLOV3g8veA6UcUMoU='}
2026-03-10T14:31:25.011 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 14:30:10.367479', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIKp6hTwcnGD8ukeIQNr75MELWv+2Zv62Bvqn/P46WBdcLSwrk3BOszcJJAnPikiTavDgHtcpBaHk6v+wfts5cI='}
2026-03-10T14:31:25.011 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T14:31:25.012 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['host.a', 'client.0']
2026-03-10T14:31:25.012 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['host.b', 'client.1']
2026-03-10T14:31:25.012 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T14:31:25.018 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-10T14:31:25.024 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-10T14:31:25.024 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f7438ec3e20>, signals=[15])
2026-03-10T14:31:25.024 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T14:31:25.025 INFO:teuthology.task.internal:Opening connections...
2026-03-10T14:31:25.025 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-10T14:31:25.026 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T14:31:25.084 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-10T14:31:25.084 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T14:31:25.146 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T14:31:25.148 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-10T14:31:25.196 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-10T14:31:25.196 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:NAME="CentOS Stream"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="9"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:ID="centos"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE="rhel fedora"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="9"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:PLATFORM_ID="platform:el9"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:ANSI_COLOR="0;31"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:LOGO="fedora-logo-icon"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://centos.org/"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T14:31:25.252 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T14:31:25.252 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-10T14:31:25.257 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-10T14:31:25.273 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-10T14:31:25.274 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:NAME="CentOS Stream"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="9"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:ID="centos"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE="rhel fedora"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="9"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:PLATFORM_ID="platform:el9"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:ANSI_COLOR="0;31"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:LOGO="fedora-logo-icon"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://centos.org/"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T14:31:25.331 INFO:teuthology.orchestra.run.vm03.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T14:31:25.331 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-10T14:31:25.336 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T14:31:25.338 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T14:31:25.339 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T14:31:25.339 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-10T14:31:25.341 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-10T14:31:25.387 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T14:31:25.388 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T14:31:25.388 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-10T14:31:25.396 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-10T14:31:25.411 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T14:31:25.444 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T14:31:25.444 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T14:31:25.453 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-10T14:31:25.468 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:31:25.660 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-10T14:31:25.675 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:31:25.865 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T14:31:25.867 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T14:31:25.867 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T14:31:25.869 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T14:31:25.885 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T14:31:25.887 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T14:31:25.888 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T14:31:25.888 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T14:31:25.925 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T14:31:25.946 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T14:31:25.947 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T14:31:25.947 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T14:31:25.995 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:31:25.995 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T14:31:26.009 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:31:26.009 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T14:31:26.037 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T14:31:26.059 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T14:31:26.068 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T14:31:26.075 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T14:31:26.084 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T14:31:26.086 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T14:31:26.087 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T14:31:26.087 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T14:31:26.112 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T14:31:26.151 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T14:31:26.153 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T14:31:26.153 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T14:31:26.178 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T14:31:26.205 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T14:31:26.256 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T14:31:26.312 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:31:26.312 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T14:31:26.373 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T14:31:26.396 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T14:31:26.455 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:31:26.455 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T14:31:26.520 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-10T14:31:26.522 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-10T14:31:26.552 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T14:31:26.590 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T14:31:26.990 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T14:31:27.018 INFO:teuthology.task.internal:Starting timer...
2026-03-10T14:31:27.018 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T14:31:27.022 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T14:31:27.025 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T14:31:27.025 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-10T14:31:27.025 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-10T14:31:27.025 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T14:31:27.025 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T14:31:27.025 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T14:31:27.025 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T14:31:27.026 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T14:31:27.027 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T14:31:27.028 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T14:31:27.740 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T14:31:27.746 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T14:31:27.746 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory_kod4kn4 --limit vm00.local,vm03.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T14:33:11.107 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm03.local')]
2026-03-10T14:33:11.108 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-10T14:33:11.108 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T14:33:11.173 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-10T14:33:11.252 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-10T14:33:11.252 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-10T14:33:11.253 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T14:33:11.315 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-10T14:33:11.396 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-10T14:33:11.396 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T14:33:11.399 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T14:33:11.399 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T14:33:11.399 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T14:33:11.401 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T14:33:11.401 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T14:33:11.448 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T14:33:11.466 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T14:33:11.476 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T14:33:11.492 INFO:teuthology.orchestra.run.vm00.stderr:sudo: ntpd: command not found
2026-03-10T14:33:11.496 INFO:teuthology.orchestra.run.vm03.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T14:33:11.506 INFO:teuthology.orchestra.run.vm00.stdout:506 Cannot talk to daemon
2026-03-10T14:33:11.524 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T14:33:11.528 INFO:teuthology.orchestra.run.vm03.stderr:sudo: ntpd: command not found
2026-03-10T14:33:11.541 INFO:teuthology.orchestra.run.vm03.stdout:506 Cannot talk to daemon
2026-03-10T14:33:11.560 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T14:33:11.560 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T14:33:11.576 INFO:teuthology.orchestra.run.vm03.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T14:33:11.600 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-10T14:33:11.604 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T14:33:11.604 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-10T14:33:11.604 INFO:teuthology.orchestra.run.vm00.stdout:^? server1a.meinberg.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T14:33:11.604 INFO:teuthology.orchestra.run.vm00.stdout:^? srv01-nc.securepod.org 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T14:33:11.604 INFO:teuthology.orchestra.run.vm00.stdout:^? node-4.infogral.is 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T14:33:11.604 INFO:teuthology.orchestra.run.vm00.stdout:^? gromit.nocabal.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T14:33:11.626 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-03-10T14:33:11.628 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T14:33:11.628 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-03-10T14:33:11.628 INFO:teuthology.orchestra.run.vm03.stdout:^? node-4.infogral.is 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T14:33:11.628 INFO:teuthology.orchestra.run.vm03.stdout:^? srv01-nc.securepod.org 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T14:33:11.628 INFO:teuthology.orchestra.run.vm03.stdout:^? gromit.nocabal.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T14:33:11.628 INFO:teuthology.orchestra.run.vm03.stdout:^? server1a.meinberg.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T14:33:11.629 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T14:33:11.657 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T14:33:11.676 DEBUG:teuthology.orchestra.run.vm00:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T14:33:11.677 DEBUG:teuthology.orchestra.run.vm03:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T14:33:11.679 DEBUG:teuthology.task.pexec:ubuntu@vm00.local< sudo dnf remove nvme-cli -y
2026-03-10T14:33:11.679 DEBUG:teuthology.task.pexec:ubuntu@vm00.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T14:33:11.679 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm00.local
2026-03-10T14:33:11.679 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T14:33:11.679 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T14:33:11.680 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo dnf remove nvme-cli -y
2026-03-10T14:33:11.680 DEBUG:teuthology.task.pexec:ubuntu@vm03.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T14:33:11.680 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm03.local
2026-03-10T14:33:11.680 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T14:33:11.680 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T14:33:11.916 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: nvme-cli
2026-03-10T14:33:11.916 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-10T14:33:11.919 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-10T14:33:11.920 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-10T14:33:11.920 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-10T14:33:11.930 INFO:teuthology.orchestra.run.vm03.stdout:No match for argument: nvme-cli
2026-03-10T14:33:11.930 INFO:teuthology.orchestra.run.vm03.stderr:No packages marked for removal.
2026-03-10T14:33:11.933 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T14:33:11.934 INFO:teuthology.orchestra.run.vm03.stdout:Nothing to do.
2026-03-10T14:33:11.934 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T14:33:12.420 INFO:teuthology.orchestra.run.vm00.stdout:Last metadata expiration check: 0:01:11 ago on Tue 10 Mar 2026 02:32:01 PM UTC.
2026-03-10T14:33:12.469 INFO:teuthology.orchestra.run.vm03.stdout:Last metadata expiration check: 0:01:04 ago on Tue 10 Mar 2026 02:32:08 PM UTC.
2026-03-10T14:33:12.537 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout: Package Architecture Version Repository Size
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:Installing:
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:Installing dependencies:
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:Install 6 Packages
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:Total download size: 2.3 M
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:Installed size: 11 M
2026-03-10T14:33:12.538 INFO:teuthology.orchestra.run.vm00.stdout:Downloading Packages:
2026-03-10T14:33:12.603 INFO:teuthology.orchestra.run.vm03.stdout:Dependencies resolved.
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout: Package Architecture Version Repository Size
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:Installing:
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:Installing dependencies:
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:Transaction Summary
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:================================================================================
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:Install 6 Packages
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:Total download size: 2.3 M
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:Installed size: 11 M
2026-03-10T14:33:12.604 INFO:teuthology.orchestra.run.vm03.stdout:Downloading Packages:
2026-03-10T14:33:12.846 INFO:teuthology.orchestra.run.vm00.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 232 kB/s | 44 kB 00:00
2026-03-10T14:33:12.847 INFO:teuthology.orchestra.run.vm00.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 378 kB/s | 72 kB 00:00
2026-03-10T14:33:12.943 INFO:teuthology.orchestra.run.vm00.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 871 kB/s | 84 kB 00:00
2026-03-10T14:33:12.944 INFO:teuthology.orchestra.run.vm00.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.5 MB/s | 150 kB 00:00
2026-03-10T14:33:12.965 INFO:teuthology.orchestra.run.vm03.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 182 kB/s | 44 kB 00:00
2026-03-10T14:33:13.039 INFO:teuthology.orchestra.run.vm00.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 3.0 MB/s | 1.2 MB 00:00
2026-03-10T14:33:13.093 INFO:teuthology.orchestra.run.vm00.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 5.5 MB/s | 837 kB 00:00
2026-03-10T14:33:13.093 INFO:teuthology.orchestra.run.vm00.stdout:--------------------------------------------------------------------------------
2026-03-10T14:33:13.093 INFO:teuthology.orchestra.run.vm00.stdout:Total 4.2 MB/s | 2.3 MB 00:00
2026-03-10T14:33:13.165 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-10T14:33:13.175 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-10T14:33:13.175 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-10T14:33:13.210 INFO:teuthology.orchestra.run.vm03.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 148 kB/s | 72 kB 00:00
2026-03-10T14:33:13.245 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-10T14:33:13.245 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-10T14:33:13.418 INFO:teuthology.orchestra.run.vm03.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 186 kB/s | 84 kB 00:00
2026-03-10T14:33:13.437 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-10T14:33:13.450 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-10T14:33:13.466 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-10T14:33:13.474 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T14:33:13.484 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T14:33:13.486 INFO:teuthology.orchestra.run.vm00.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T14:33:13.700 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T14:33:13.705 INFO:teuthology.orchestra.run.vm00.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T14:33:14.103 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T14:33:14.103 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T14:33:14.103 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:33:14.388 INFO:teuthology.orchestra.run.vm03.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 128 kB/s | 150 kB 00:01
2026-03-10T14:33:14.787 INFO:teuthology.orchestra.run.vm03.stdout:(5/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 612 kB/s | 837 kB 00:01
2026-03-10T14:33:14.802 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-10T14:33:14.803 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-10T14:33:14.803 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T14:33:14.803 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T14:33:14.803 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-10T14:33:15.011 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-10T14:33:15.011 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:33:15.011 INFO:teuthology.orchestra.run.vm00.stdout:Installed:
2026-03-10T14:33:15.011 INFO:teuthology.orchestra.run.vm00.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T14:33:15.011 INFO:teuthology.orchestra.run.vm00.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T14:33:15.011 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T14:33:15.011 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:33:15.011 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-10T14:33:15.205 DEBUG:teuthology.parallel:result is None
2026-03-10T14:33:15.698 INFO:teuthology.orchestra.run.vm03.stdout:(6/6): nvme-cli-2.16-1.el9.x86_64.rpm 397 kB/s | 1.2 MB 00:02
2026-03-10T14:33:15.698 INFO:teuthology.orchestra.run.vm03.stdout:--------------------------------------------------------------------------------
2026-03-10T14:33:15.698 INFO:teuthology.orchestra.run.vm03.stdout:Total 765 kB/s | 2.3 MB 00:03
2026-03-10T14:33:15.775 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction check
2026-03-10T14:33:15.783 INFO:teuthology.orchestra.run.vm03.stdout:Transaction check succeeded.
2026-03-10T14:33:15.783 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction test
2026-03-10T14:33:15.844 INFO:teuthology.orchestra.run.vm03.stdout:Transaction test succeeded.
2026-03-10T14:33:15.844 INFO:teuthology.orchestra.run.vm03.stdout:Running transaction
2026-03-10T14:33:16.116 INFO:teuthology.orchestra.run.vm03.stdout: Preparing : 1/1
2026-03-10T14:33:16.131 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-10T14:33:16.143 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-10T14:33:16.156 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T14:33:16.165 INFO:teuthology.orchestra.run.vm03.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T14:33:16.166 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T14:33:16.358 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T14:33:16.363 INFO:teuthology.orchestra.run.vm03.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T14:33:16.801 INFO:teuthology.orchestra.run.vm03.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T14:33:16.802 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T14:33:16.802 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:33:17.394 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-10T14:33:17.394 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-10T14:33:17.394 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T14:33:17.394 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T14:33:17.394 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-10T14:33:17.489 INFO:teuthology.orchestra.run.vm03.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-10T14:33:17.489 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:33:17.489 INFO:teuthology.orchestra.run.vm03.stdout:Installed:
2026-03-10T14:33:17.489 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T14:33:17.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T14:33:17.489 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T14:33:17.489 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:33:17.489 INFO:teuthology.orchestra.run.vm03.stdout:Complete!
2026-03-10T14:33:17.571 DEBUG:teuthology.parallel:result is None
2026-03-10T14:33:17.571 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T14:33:17.620 INFO:tasks.cephadm:Config: {'roleless': True, 'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_DAEMON_PLACE_FAIL', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T14:33:17.620 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T14:33:17.620 INFO:tasks.cephadm:Cluster fsid is 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:33:17.620 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T14:33:17.620 INFO:tasks.cephadm:No mon roles; fabricating mons
2026-03-10T14:33:17.620 INFO:tasks.cephadm:Monitor IPs: {'mon.vm00': '192.168.123.100', 'mon.vm03': '192.168.123.103'}
2026-03-10T14:33:17.620 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T14:33:17.620 DEBUG:teuthology.orchestra.run.vm00:> sudo hostname $(hostname -s)
2026-03-10T14:33:17.657 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s)
2026-03-10T14:33:17.706 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra
2026-03-10T14:33:17.706 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T14:33:18.344 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T14:33:18.947 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T14:33:18.948 INFO:tasks.cephadm:Discovered chacra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T14:33:18.948 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T14:33:18.948 DEBUG:teuthology.orchestra.run.vm00:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T14:33:20.562 INFO:teuthology.orchestra.run.vm00.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 14:33 /home/ubuntu/cephtest/cephadm
2026-03-10T14:33:20.562 DEBUG:teuthology.orchestra.run.vm03:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T14:33:22.098 INFO:teuthology.orchestra.run.vm03.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 14:33 /home/ubuntu/cephtest/cephadm
2026-03-10T14:33:22.098 DEBUG:teuthology.orchestra.run.vm00:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T14:33:22.121 DEBUG:teuthology.orchestra.run.vm03:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T14:33:22.151 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T14:33:22.151 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T14:33:22.163 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T14:33:22.355 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T14:33:22.408 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T14:34:07.977 INFO:teuthology.orchestra.run.vm03.stdout:{
2026-03-10T14:34:07.977 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T14:34:07.977 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T14:34:07.977 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [
2026-03-10T14:34:07.977 INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T14:34:07.977 INFO:teuthology.orchestra.run.vm03.stdout: ]
2026-03-10T14:34:07.977 INFO:teuthology.orchestra.run.vm03.stdout:}
2026-03-10T14:34:12.604 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-10T14:34:12.604 INFO:teuthology.orchestra.run.vm00.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T14:34:12.604 INFO:teuthology.orchestra.run.vm00.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T14:34:12.604 INFO:teuthology.orchestra.run.vm00.stdout: "repo_digests": [
2026-03-10T14:34:12.604 INFO:teuthology.orchestra.run.vm00.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T14:34:12.604 INFO:teuthology.orchestra.run.vm00.stdout: ]
2026-03-10T14:34:12.604 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-10T14:34:12.629 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph
2026-03-10T14:34:12.660 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph
2026-03-10T14:34:12.694 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /etc/ceph
2026-03-10T14:34:12.723 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph
2026-03-10T14:34:12.762 INFO:tasks.cephadm:Writing seed config...
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T14:34:12.762 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-10T14:34:12.763 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:34:12.763 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T14:34:12.778 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000

# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd

# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true

# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false

# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off

# tests delete pools
mon allow pool delete = true

fsid = 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops

osd recover clone overlap = true
osd recovery max chunk = 1048576

osd deep scrub update digest min age = 30

osd map max advance = 10

osd memory target autotune = true

# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true

debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10

# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m

# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false

debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T14:34:12.778 DEBUG:teuthology.orchestra.run.vm00:mon.vm00> sudo journalctl -f -n 0 -u ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00.service
2026-03-10T14:34:12.819 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T14:34:12.819 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T14:34:12.972 INFO:teuthology.orchestra.run.vm00.stdout:--------------------------------------------------------------------------------
2026-03-10T14:34:12.972 INFO:teuthology.orchestra.run.vm00.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.100', '--skip-admin-label']
2026-03-10T14:34:12.972 INFO:teuthology.orchestra.run.vm00.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T14:34:12.972 INFO:teuthology.orchestra.run.vm00.stdout:Verifying podman|docker is present...
2026-03-10T14:34:12.996 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stdout 5.8.0
2026-03-10T14:34:12.996 INFO:teuthology.orchestra.run.vm00.stdout:Verifying lvm2 is present...
2026-03-10T14:34:12.996 INFO:teuthology.orchestra.run.vm00.stdout:Verifying time synchronization is in place...
2026-03-10T14:34:13.004 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T14:34:13.004 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T14:34:13.012 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T14:34:13.012 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive
2026-03-10T14:34:13.019 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled
2026-03-10T14:34:13.026 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active
2026-03-10T14:34:13.026 INFO:teuthology.orchestra.run.vm00.stdout:Unit chronyd.service is enabled and running
2026-03-10T14:34:13.026 INFO:teuthology.orchestra.run.vm00.stdout:Repeating the final host check...
2026-03-10T14:34:13.046 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stdout 5.8.0
2026-03-10T14:34:13.046 INFO:teuthology.orchestra.run.vm00.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-10T14:34:13.046 INFO:teuthology.orchestra.run.vm00.stdout:systemctl is present
2026-03-10T14:34:13.046 INFO:teuthology.orchestra.run.vm00.stdout:lvcreate is present
2026-03-10T14:34:13.054 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T14:34:13.054 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T14:34:13.060 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T14:34:13.060 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive
2026-03-10T14:34:13.068 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled
2026-03-10T14:34:13.076 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active
2026-03-10T14:34:13.076 INFO:teuthology.orchestra.run.vm00.stdout:Unit chronyd.service is enabled and running
2026-03-10T14:34:13.076 INFO:teuthology.orchestra.run.vm00.stdout:Host looks OK
2026-03-10T14:34:13.076 INFO:teuthology.orchestra.run.vm00.stdout:Cluster fsid: 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:34:13.076 INFO:teuthology.orchestra.run.vm00.stdout:Acquiring lock 139791036026304 on /run/cephadm/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.lock
2026-03-10T14:34:13.076 INFO:teuthology.orchestra.run.vm00.stdout:Lock 139791036026304 acquired on /run/cephadm/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.lock
2026-03-10T14:34:13.077 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 3300 ...
2026-03-10T14:34:13.077 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 6789 ...
2026-03-10T14:34:13.077 INFO:teuthology.orchestra.run.vm00.stdout:Base mon IP(s) is [192.168.123.100:3300, 192.168.123.100:6789], mon addrv is [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-10T14:34:13.082 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.100 metric 100
2026-03-10T14:34:13.082 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.100 metric 100
2026-03-10T14:34:13.084 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T14:34:13.084 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:0/64 scope link noprefixroute
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24`
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24`
2026-03-10T14:34:13.087 INFO:teuthology.orchestra.run.vm00.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-10T14:34:13.088 INFO:teuthology.orchestra.run.vm00.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T14:34:13.088 INFO:teuthology.orchestra.run.vm00.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T14:34:14.377 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T14:34:14.377 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
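The CIDR check is run once per mon port, which is why the "Mon IP ... is in CIDR network ..." line appears twice. The image pull that follows is an ordinary podman pull; done by hand against the same image reference it would look like this (illustrative):

    sudo podman pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
    # Confirm the local image ID matches the one logged above (654f31e6...).
    sudo podman images quay.ceph.io/ceph-ci/ceph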
2026-03-10T14:34:14.377 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Getting image source signatures
2026-03-10T14:34:14.377 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7
2026-03-10T14:34:14.377 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92
2026-03-10T14:34:14.377 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T14:34:14.377 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-10T14:34:14.532 INFO:teuthology.orchestra.run.vm00.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T14:34:14.532 INFO:teuthology.orchestra.run.vm00.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T14:34:14.532 INFO:teuthology.orchestra.run.vm00.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T14:34:14.635 INFO:teuthology.orchestra.run.vm00.stdout:stat: stdout 167 167
2026-03-10T14:34:14.635 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial keys...
2026-03-10T14:34:14.735 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQDmK7BpJau2KhAArDpDR/nNsJsYnpwMvKV/Bg==
2026-03-10T14:34:14.832 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQDmK7Bpmsh0MBAA/Q8LdlGVniw/XMeuVbVx7g==
2026-03-10T14:34:14.948 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQDmK7Bpv3SBNhAAk1GqP41vVxql9k+mF4Z/qA==
2026-03-10T14:34:14.948 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial monmap...
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool for vm00 [v2:192.168.123.100:3300,v1:192.168.123.100:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = quincy
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: set fsid to 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:34:15.053 INFO:teuthology.orchestra.run.vm00.stdout:Creating mon...
2026-03-10T14:34:15.336 INFO:teuthology.orchestra.run.vm00.stdout:create mon.vm00 on
2026-03-10T14:34:15.642 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
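The monmap step above is monmaptool run inside the container; cephadm then echoes the captured output a second time, which accounts for the repeated "monmap file /tmp/monmap" block. A sketch of an equivalent standalone invocation, under the assumption that the standard --create/--addv/--fsid options are used (the exact flags cephadm passes are not shown in the log):

    monmaptool --create \
        --addv vm00 '[v2:192.168.123.100:3300,v1:192.168.123.100:6789]' \
        --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf \
        /tmp/monmap
    # Inspect the result: one monitor, epoch 0, the fsid set above.
    monmaptool --print /tmp/monmap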
2026-03-10T14:34:15.852 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.target → /etc/systemd/system/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.target.
2026-03-10T14:34:15.853 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.target → /etc/systemd/system/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.target.
2026-03-10T14:34:16.026 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00
2026-03-10T14:34:16.026 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00.service: Unit ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00.service not loaded.
2026-03-10T14:34:16.177 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.target.wants/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00.service → /etc/systemd/system/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@.service.
2026-03-10T14:34:16.444 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T14:34:16.445 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T14:34:16.445 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon to start...
2026-03-10T14:34:16.445 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon...
2026-03-10T14:34:16.487 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:16 vm00 systemd[1]: Started Ceph mon.vm00 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.
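The mon is an ordinary templated systemd unit named after the cluster fsid (the "reset-failed ... not loaded" error above is benign: cephadm resets any stale failed state before the first start, and the unit simply does not exist yet). It can be inspected with the usual tools, e.g. (illustrative):

    systemctl status 'ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00.service'
    # Tail the same journal stream teuthology is following in this log.
    journalctl -u 'ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00.service' -n 50 --no-pager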
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout id: 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout services:
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm00 (age 0.139072s)
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout data:
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:mon is available
2026-03-10T14:34:16.691 INFO:teuthology.orchestra.run.vm00.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T14:34:16.743 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:16 vm00 ceph-mon[46909]: mkfs 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:34:16.743 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:16 vm00 ceph-mon[46909]: mon.vm00 is new leader, mons vm00 in quorum (ranks 0)
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T14:34:16.884 INFO:teuthology.orchestra.run.vm00.stdout:Generating new minimal ceph.conf... 2026-03-10T14:34:17.091 INFO:teuthology.orchestra.run.vm00.stdout:Restarting the monitor... 2026-03-10T14:34:17.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 systemd[1]: Stopping Ceph mon.vm00 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf... 2026-03-10T14:34:17.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00[46905]: 2026-03-10T14:34:17.181+0000 7fbea1b0a640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm00 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T14:34:17.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00[46905]: 2026-03-10T14:34:17.181+0000 7fbea1b0a640 -1 mon.vm00@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T14:34:17.717 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 podman[47110]: 2026-03-10 14:34:17.448360576 +0000 UTC m=+0.280923173 container died 92bab47006be558bb960eeab8e6819f1786e46a14a2de12ee473ee4ed6e624c3 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2) 2026-03-10T14:34:17.717 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 podman[47110]: 2026-03-10 14:34:17.460934664 +0000 UTC m=+0.293497261 container remove 92bab47006be558bb960eeab8e6819f1786e46a14a2de12ee473ee4ed6e624c3 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3) 2026-03-10T14:34:17.717 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 
bash[47110]: ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00 2026-03-10T14:34:17.717 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 systemd[1]: ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00.service: Deactivated successfully. 2026-03-10T14:34:17.717 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 systemd[1]: Stopped Ceph mon.vm00 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf. 2026-03-10T14:34:17.717 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 systemd[1]: Starting Ceph mon.vm00 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf... 2026-03-10T14:34:17.717 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 podman[47178]: 2026-03-10 14:34:17.625509331 +0000 UTC m=+0.018797821 container create 6d040919b8d4f026bab4d47abda8adcd2dd0dbc2d7826e9301c1ce3ac91282e8 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2) 2026-03-10T14:34:17.717 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 podman[47178]: 2026-03-10 14:34:17.617422202 +0000 UTC m=+0.010710701 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T14:34:17.884 INFO:teuthology.orchestra.run.vm00.stdout:Setting public_network to 192.168.123.0/24 in mon config section 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 podman[47178]: 2026-03-10 14:34:17.871083969 +0000 UTC m=+0.264372459 container init 6d040919b8d4f026bab4d47abda8adcd2dd0dbc2d7826e9301c1ce3ac91282e8 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 podman[47178]: 2026-03-10 14:34:17.873968369 +0000 UTC m=+0.267256859 container start 6d040919b8d4f026bab4d47abda8adcd2dd0dbc2d7826e9301c1ce3ac91282e8 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 bash[47178]: 6d040919b8d4f026bab4d47abda8adcd2dd0dbc2d7826e9301c1ce3ac91282e8 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 systemd[1]: Started Ceph mon.vm00 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf. 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: set uid:gid to 167:167 (ceph:ceph) 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: pidfile_write: ignore empty --pid-file 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: load: jerasure load: lrc 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: RocksDB version: 7.9.2 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Git sha 0 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: DB SUMMARY 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: DB Session ID: DXTI8PNX851T69JCUL31 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: CURRENT file: CURRENT 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: IDENTITY file: IDENTITY 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm00/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm00/store.db: 000009.log size: 75099 ; 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.error_if_exists: 0 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.create_if_missing: 0 2026-03-10T14:34:17.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.paranoid_checks: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: 
Options.flush_verify_memtable_count: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.env: 0x55999be67dc0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.fs: PosixFileSystem 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.info_log: 0x55999d496b20 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_file_opening_threads: 16 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.statistics: (nil) 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.use_fsync: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_log_file_size: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.keep_log_file_num: 1000 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.recycle_log_file_num: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.allow_fallocate: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.allow_mmap_reads: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.allow_mmap_writes: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.use_direct_reads: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.create_missing_column_families: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.db_log_dir: 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.wal_dir: 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T14:34:17.983 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.write_buffer_manager: 0x55999d49b900 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.enable_thread_tracking: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.unordered_write: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.row_cache: None 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.wal_filter: None 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: 
Options.two_write_queues: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.wal_compression: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.atomic_flush: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.log_readahead_size: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T14:34:17.983 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_background_jobs: 2 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_background_compactions: -1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_subcompactions: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 
ceph-mon[47192]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_open_files: -1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_readahead_size: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_background_flushes: -1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Compression algorithms supported: 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: kZSTD supported: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: kXpressCompression supported: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: kBZip2Compression supported: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: kLZ4Compression supported: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: kZlibCompression supported: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: kSnappyCompression supported: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm00/store.db/MANIFEST-000010 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.merge_operator: 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_filter: None 
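The RocksDB banner running through this part of the journal is the restarted mon reopening its key-value store at /var/lib/ceph/mon/ceph-vm00/store.db; the options dump is routine startup output, not an error. With the mon stopped, the same store can be examined offline, as shown below (a sketch only: in a cephadm deployment the path lives inside the daemon's data dir under /var/lib/ceph/<fsid>/, so the host-side path differs from the in-container one logged here):

    # Extract the current monmap from the mon store and print it.
    ceph-monstore-tool /var/lib/ceph/mon/ceph-vm00 get monmap -- --out /tmp/monmap.bin
    monmaptool --print /tmp/monmap.bin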
2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55999d4966e0) 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: cache_index_and_filter_blocks: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: pin_top_level_index_and_filter: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: index_type: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: data_block_index_type: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: index_shortening: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: checksum: 4 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: no_block_cache: 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_cache: 0x55999d4bb350 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_cache_name: BinnedLRUCache 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_cache_options: 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: capacity : 536870912 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: num_shard_bits : 4 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: strict_capacity_limit : 0 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: high_pri_pool_ratio: 0.000 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_cache_compressed: (nil) 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: persistent_cache: (nil) 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_size: 4096 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_size_deviation: 10 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_restart_interval: 16 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: index_block_restart_interval: 1 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: metadata_block_size: 4096 2026-03-10T14:34:17.984 INFO:journalctl@ceph.mon.vm00.vm00.stdout: partition_filters: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: use_delta_encoding: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: filter_policy: bloomfilter 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: whole_key_filtering: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: verify_compression: 0 2026-03-10T14:34:17.985 
INFO:journalctl@ceph.mon.vm00.vm00.stdout: read_amp_bytes_per_bit: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: format_version: 5 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: enable_index_compression: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_align: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: max_auto_readahead_size: 262144 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: prepopulate_block_cache: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: initial_auto_readahead_size: 8192 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression: NoCompression 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.num_levels: 7 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T14:34:17.985 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.inplace_update_support: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.bloom_locality: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.max_successive_merges: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.ttl: 2592000 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.enable_blob_files: false 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.min_blob_size: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: 
Options.blob_compaction_readahead_size: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm00/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 750043ee-64b2-4b09-9c86-2187847ebd73 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773153257904228, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773153257909081, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 223, "table_properties": {"data_size": 70446, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9562, "raw_average_key_size": 49, "raw_value_size": 65071, "raw_average_value_size": 335, "num_data_blocks": 8, "num_entries": 194, "num_filter_entries": 194, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773153257, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "750043ee-64b2-4b09-9c86-2187847ebd73", "db_session_id": "DXTI8PNX851T69JCUL31", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773153257909167, "job": 1, "event": "recovery_finished"} 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 
vm00 ceph-mon[47192]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm00/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55999d4bce00 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: DB pointer 0x55999d5d6000 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ** DB Stats ** 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T14:34:17.986 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ** Compaction Stats [default] ** 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: L0 2/0 72.35 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 45.7 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Sum 2/0 72.35 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 45.7 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 45.7 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ** Compaction Stats [default] ** 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 45.7 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Cumulative compaction: 0.00 GB write, 6.93 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Interval compaction: 0.00 GB write, 6.93 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Block cache BinnedLRUCache@0x55999d4bb350#2 capacity: 512.00 MB usage: 1.06 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Block cache entry stats(count,size,portion): FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%) 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: starting mon.vm00 rank 0 at public addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] at bind addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon_data /var/lib/ceph/mon/ceph-vm00 fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00@-1(???) 
e1 preinit fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00@-1(???).mds e1 new map 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00@-1(???).mds e1 print_map 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: e1 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: btime 2026-03-10T14:34:16:500910+0000 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: legacy client fscid: -1 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout: No filesystems configured 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T14:34:17.987 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mon.vm00 is new leader, mons vm00 in quorum (ranks 0) 2026-03-10T14:34:18.097 INFO:teuthology.orchestra.run.vm00.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T14:34:18.098 INFO:teuthology.orchestra.run.vm00.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T14:34:18.098 INFO:teuthology.orchestra.run.vm00.stdout:Creating mgr... 2026-03-10T14:34:18.099 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T14:34:18.099 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T14:34:18.099 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8443 ... 2026-03-10T14:34:18.257 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mgr.vm00.qkhroe 2026-03-10T14:34:18.257 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mgr.vm00.qkhroe.service: Unit ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mgr.vm00.qkhroe.service not loaded. 
2026-03-10T14:34:18.264 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: monmap epoch 1 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: last_changed 2026-03-10T14:34:15.033123+0000 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: created 2026-03-10T14:34:15.033123+0000 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: min_mon_release 19 (squid) 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: election_strategy: 1 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.vm00 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: fsmap 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T14:34:18.265 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:17 vm00 ceph-mon[47192]: mgrmap e1: no daemons active 2026-03-10T14:34:18.390 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.target.wants/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mgr.vm00.qkhroe.service → /etc/systemd/system/ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@.service. 2026-03-10T14:34:18.561 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T14:34:18.561 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T14:34:18.561 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T14:34:18.561 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[9283, 8765, 8443]>. firewalld.service is not available 2026-03-10T14:34:18.561 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr to start... 2026-03-10T14:34:18.561 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr... 
2026-03-10T14:34:18.794 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf", 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "vm00" 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 
"num_objects": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T14:34:16:500910+0000", 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T14:34:18.795 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T14:34:16.501773+0000", 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:34:18.796 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (1/15)... 2026-03-10T14:34:19.551 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:19 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/4202151414' entity='client.admin' 2026-03-10T14:34:19.551 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:19 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/11321444' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T14:34:21.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:34:21.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf", 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "vm00" 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stdout "num_pools": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T14:34:16:500910+0000", 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T14:34:21.220 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T14:34:16.501773+0000", 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:34:21.221 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (2/15)... 2026-03-10T14:34:21.444 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:21 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/3312659385' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: Activating manager daemon vm00.qkhroe 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: mgrmap e2: vm00.qkhroe(active, starting, since 0.00374851s) 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr metadata", "who": "vm00.qkhroe", "id": "vm00.qkhroe"}]: dispatch 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: Manager daemon vm00.qkhroe is now available 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.qkhroe/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.qkhroe/trash_purge_schedule"}]: dispatch 2026-03-10T14:34:22.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:22 vm00 ceph-mon[47192]: from='mgr.14100 192.168.123.100:0/2947973822' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf", 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 
2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "vm00" 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-10T14:34:23.729 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T14:34:16:500910+0000", 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T14:34:16.501773+0000", 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:34:23.730 INFO:teuthology.orchestra.run.vm00.stdout:mgr is available 2026-03-10T14:34:23.994 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T14:34:23.995 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T14:34:23.995 INFO:teuthology.orchestra.run.vm00.stdout:Enabling cephadm module... 2026-03-10T14:34:24.039 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:24 vm00 ceph-mon[47192]: mgrmap e3: vm00.qkhroe(active, since 1.00895s) 2026-03-10T14:34:24.039 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:24 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/875669710' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T14:34:24.039 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:24 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1753893263' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T14:34:25.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:25 vm00 ceph-mon[47192]: mgrmap e4: vm00.qkhroe(active, since 2s) 2026-03-10T14:34:25.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:25 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2472567816' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T14:34:25.382 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:34:25.382 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T14:34:25.383 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T14:34:25.383 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "vm00.qkhroe", 2026-03-10T14:34:25.383 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T14:34:25.383 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:34:25.383 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-10T14:34:25.383 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 5... 2026-03-10T14:34:26.285 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:26 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2472567816' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T14:34:26.285 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:26 vm00 ceph-mon[47192]: mgrmap e5: vm00.qkhroe(active, since 3s) 2026-03-10T14:34:26.285 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:26 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/1652347905' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: Active manager daemon vm00.qkhroe restarted 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: Activating manager daemon vm00.qkhroe 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: mgrmap e6: vm00.qkhroe(active, starting, since 0.606685s) 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr metadata", "who": "vm00.qkhroe", "id": "vm00.qkhroe"}]: dispatch 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: Manager daemon vm00.qkhroe is now available 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:34:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:29 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:34:30.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:34:30.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T14:34:30.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T14:34:30.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:34:30.165 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 5 is available 2026-03-10T14:34:30.165 INFO:teuthology.orchestra.run.vm00.stdout:Setting orchestrator backend to cephadm... 2026-03-10T14:34:30.423 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:30 vm00 ceph-mon[47192]: Found migration_current of "None". Setting to last migration. 
2026-03-10T14:34:30.423 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:30 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.qkhroe/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:34:30.423 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:30 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.qkhroe/trash_purge_schedule"}]: dispatch 2026-03-10T14:34:30.423 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:30 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:30.423 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:30 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:30.423 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:30 vm00 ceph-mon[47192]: mgrmap e7: vm00.qkhroe(active, since 1.60936s) 2026-03-10T14:34:30.715 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T14:34:30.715 INFO:teuthology.orchestra.run.vm00.stdout:Generating ssh key... 2026-03-10T14:34:31.276 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxPAiOUOg3cDNNv8yv1nXnaga6MuudbVPPfPefGaDTI ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf 2026-03-10T14:34:31.277 INFO:teuthology.orchestra.run.vm00.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T14:34:31.277 INFO:teuthology.orchestra.run.vm00.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T14:34:31.277 INFO:teuthology.orchestra.run.vm00.stdout:Adding host vm00... 
2026-03-10T14:34:31.502 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:30] ENGINE Bus STARTING 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:30] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:30] ENGINE Client ('192.168.123.100', 35552) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:30] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:30] ENGINE Bus STARTED 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: Generating ssh key... 
2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:31.503 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:31 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:32.703 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:32 vm00 ceph-mon[47192]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:32.703 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:32 vm00 ceph-mon[47192]: mgrmap e8: vm00.qkhroe(active, since 2s) 2026-03-10T14:34:32.703 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:32 vm00 ceph-mon[47192]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:32.703 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:32 vm00 ceph-mon[47192]: Deploying cephadm binary to vm00 2026-03-10T14:34:33.250 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Added host 'vm00' with addr '192.168.123.100' 2026-03-10T14:34:33.250 INFO:teuthology.orchestra.run.vm00.stdout:Deploying mon service with default placement... 2026-03-10T14:34:33.620 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T14:34:33.620 INFO:teuthology.orchestra.run.vm00.stdout:Deploying mgr service with default placement... 2026-03-10T14:34:33.894 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T14:34:33.894 INFO:teuthology.orchestra.run.vm00.stdout:Deploying crash service with default placement... 2026-03-10T14:34:34.169 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled crash update... 2026-03-10T14:34:34.170 INFO:teuthology.orchestra.run.vm00.stdout:Deploying ceph-exporter service with default placement... 
2026-03-10T14:34:34.398 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: Added host vm00 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: Saving service mon spec with placement count:5 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: Saving service mgr spec with placement count:2 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:34.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:34 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:34.447 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update... 2026-03-10T14:34:34.448 INFO:teuthology.orchestra.run.vm00.stdout:Deploying prometheus service with default placement... 2026-03-10T14:34:34.739 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled prometheus update... 2026-03-10T14:34:34.739 INFO:teuthology.orchestra.run.vm00.stdout:Deploying grafana service with default placement... 2026-03-10T14:34:35.101 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled grafana update... 2026-03-10T14:34:35.101 INFO:teuthology.orchestra.run.vm00.stdout:Deploying node-exporter service with default placement... 
2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: Saving service crash spec with placement * 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: Saving service ceph-exporter spec with placement * 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: Saving service prometheus spec with placement count:1 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: Saving service grafana spec with placement count:1 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:35.497 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:35 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:35.548 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update... 2026-03-10T14:34:35.548 INFO:teuthology.orchestra.run.vm00.stdout:Deploying alertmanager service with default placement... 2026-03-10T14:34:35.993 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update... 2026-03-10T14:34:36.693 INFO:teuthology.orchestra.run.vm00.stdout:Enabling the dashboard module... 
2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: Saving service node-exporter spec with placement * 2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: Saving service alertmanager spec with placement count:1 2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:36.940 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:36 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1262793946' entity='client.admin' 2026-03-10T14:34:37.920 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:37 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1831458689' entity='client.admin' 2026-03-10T14:34:37.920 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:37 vm00 ceph-mon[47192]: from='mgr.14118 192.168.123.100:0/3617820306' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:37.920 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:37 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/788957001' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T14:34:38.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:34:38.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-10T14:34:38.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T14:34:38.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "vm00.qkhroe", 2026-03-10T14:34:38.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T14:34:38.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T14:34:38.059 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-10T14:34:38.059 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 9... 2026-03-10T14:34:38.955 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:38 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/788957001' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T14:34:38.955 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:38 vm00 ceph-mon[47192]: mgrmap e9: vm00.qkhroe(active, since 9s) 2026-03-10T14:34:38.955 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:38 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2707459913' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: Active manager daemon vm00.qkhroe restarted 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: Activating manager daemon vm00.qkhroe 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: mgrmap e10: vm00.qkhroe(active, starting, since 0.0318325s) 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr metadata", "who": "vm00.qkhroe", "id": "vm00.qkhroe"}]: dispatch 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: Manager daemon vm00.qkhroe is now available 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.qkhroe/mirror_snapshot_schedule"}]: dispatch 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:34:41.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:41 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.qkhroe/trash_purge_schedule"}]: dispatch 2026-03-10T14:34:42.324 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T14:34:42.324 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T14:34:42.324 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T14:34:42.324 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stdout } 2026-03-10T14:34:42.324 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 9 is available 2026-03-10T14:34:42.324 INFO:teuthology.orchestra.run.vm00.stdout:Generating a dashboard self-signed certificate... 2026-03-10T14:34:42.711 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T14:34:42.711 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial admin user... 2026-03-10T14:34:42.911 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:42.911 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:42] ENGINE Bus STARTING 2026-03-10T14:34:42.911 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:42] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T14:34:42.911 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:42] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T14:34:42.911 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:42] ENGINE Bus STARTED 2026-03-10T14:34:42.911 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: [10/Mar/2026:14:34:42] ENGINE Client ('192.168.123.100', 44922) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T14:34:42.911 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: mgrmap e11: vm00.qkhroe(active, since 1.0388s) 2026-03-10T14:34:42.911 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T14:34:42.912 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T14:34:42.912 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:42.912 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:42.912 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:42.912 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:42 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:43.150 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$VdYkXJmeBiQsFGcWHeYHPemY/bemxMndHNdog50wb3UJdMqQ6U1/m", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773153283, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T14:34:43.150 INFO:teuthology.orchestra.run.vm00.stdout:Fetching dashboard port number... 
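Above, bootstrap enables the dashboard module, waits for the mgr to restart, generates a self-signed certificate, and creates the initial admin user. A sketch of the same sequence run manually; the password file path is illustrative, not taken from the log:

    ceph mgr module enable dashboard
    ceph dashboard create-self-signed-cert
    printf '%s' 'REPLACE_WITH_PASSWORD' > /tmp/dash-pass   # illustrative path
    ceph dashboard ac-user-create admin -i /tmp/dash-pass administrator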
2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout:Ceph Dashboard is now available at: 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout: URL: https://vm00.local:8443/ 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout: User: admin 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout: Password: 7aiavcecm3 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.433 INFO:teuthology.orchestra.run.vm00.stdout:Saving cluster configuration to /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config directory 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: ceph telemetry on 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout:For more information see: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:43.743 INFO:teuthology.orchestra.run.vm00.stdout:Bootstrap complete. 2026-03-10T14:34:43.780 INFO:tasks.cephadm:Fetching config... 2026-03-10T14:34:43.780 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:34:43.780 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T14:34:43.803 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T14:34:43.803 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:34:43.803 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T14:34:43.865 INFO:tasks.cephadm:Fetching mon keyring... 
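The "Fetching ..." steps here pull the cluster config and keyrings off the bootstrap host by streaming them through dd, as the DEBUG command lines show. Roughly equivalent from a workstation (hostname and fsid below are the ones from this run):

    ssh vm00 'dd if=/etc/ceph/ceph.conf of=/dev/stdout' > ceph.conf
    ssh vm00 'sudo dd if=/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/keyring of=/dev/stdout' > mon.keyring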
2026-03-10T14:34:43.866 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:34:43.866 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/keyring of=/dev/stdout 2026-03-10T14:34:43.934 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T14:34:43.934 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T14:34:43.934 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T14:34:43.997 INFO:tasks.cephadm:Installing pub ssh key for root users... 2026-03-10T14:34:43.998 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxPAiOUOg3cDNNv8yv1nXnaga6MuudbVPPfPefGaDTI ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T14:34:44.087 INFO:teuthology.orchestra.run.vm00.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxPAiOUOg3cDNNv8yv1nXnaga6MuudbVPPfPefGaDTI ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf 2026-03-10T14:34:44.119 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxPAiOUOg3cDNNv8yv1nXnaga6MuudbVPPfPefGaDTI ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T14:34:44.155 INFO:teuthology.orchestra.run.vm03.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxPAiOUOg3cDNNv8yv1nXnaga6MuudbVPPfPefGaDTI ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf 2026-03-10T14:34:44.165 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T14:34:44.363 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:34:44.430 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:44 vm00 ceph-mon[47192]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:44.430 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:44 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:44.431 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:44 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1230411954' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T14:34:44.431 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:44 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/3418816034' entity='client.admin' 2026-03-10T14:34:44.913 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T14:34:44.914 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T14:34:45.140 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:34:45.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:45 vm00 ceph-mon[47192]: mgrmap e12: vm00.qkhroe(active, since 2s) 2026-03-10T14:34:45.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:45 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/4127659800' entity='client.admin' 2026-03-10T14:34:45.485 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm03 2026-03-10T14:34:45.485 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:34:45.485 DEBUG:teuthology.orchestra.run.vm03:> dd of=/etc/ceph/ceph.conf 2026-03-10T14:34:45.503 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T14:34:45.503 DEBUG:teuthology.orchestra.run.vm03:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T14:34:45.562 INFO:tasks.cephadm:Adding host vm03 to orchestrator... 2026-03-10T14:34:45.562 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch host add vm03 2026-03-10T14:34:45.746 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:34:46.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:46 vm00 ceph-mon[47192]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:46.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:46 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:46.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:46 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:46.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:46 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:46.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:46 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:34:46.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:46 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:34:46.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:46 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 
14:34:47 vm00 ceph-mon[47192]: Updating vm00:/etc/ceph/ceph.conf 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: Updating vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: Deploying cephadm binary to vm03 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: Updating vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.client.admin.keyring 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm00", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm00", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T14:34:47.432 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:47 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:34:47.903 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm03' with addr '192.168.123.103' 2026-03-10T14:34:47.970 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch host ls --format=json 2026-03-10T14:34:48.443 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: Deploying daemon ceph-exporter.vm00 on vm00 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 
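Enrolling vm03 above has two parts: the cluster's public ssh key is appended to root's authorized_keys on the new host, then the host is added to the orchestrator. A hand-run sketch of the same steps:

    ceph cephadm get-pub-key > ceph.pub
    ssh-copy-id -f -i ceph.pub root@vm03      # the log does this with tee instead
    ceph orch host add vm03 192.168.123.103
    ceph orch host ls --format=json           # verify both hosts are listed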
2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm00", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm00", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: mgrmap e13: vm00.qkhroe(active, since 6s) 2026-03-10T14:34:49.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:48 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:49.062 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:34:49.062 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}] 2026-03-10T14:34:49.210 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T14:34:49.210 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd crush tunables default 2026-03-10T14:34:49.428 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:34:50.200 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:49 vm00 ceph-mon[47192]: Deploying daemon crash.vm00 on vm00 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:49 vm00 ceph-mon[47192]: Added host vm03 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:49 vm00 ceph-mon[47192]: from='client.14189 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:49 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:49 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:49 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 
14:34:49 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:49 vm00 ceph-mon[47192]: Deploying daemon node-exporter.vm00 on vm00 2026-03-10T14:34:50.202 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:49 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/849620245' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T14:34:50.256 INFO:tasks.cephadm:Adding mon.vm00 on vm00 2026-03-10T14:34:50.257 INFO:tasks.cephadm:Adding mon.vm03 on vm03 2026-03-10T14:34:50.257 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch apply mon '2;vm00:192.168.123.100=vm00;vm03:192.168.123.103=vm03' 2026-03-10T14:34:50.442 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:50.485 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:50.721 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mon update... 2026-03-10T14:34:50.766 DEBUG:teuthology.orchestra.run.vm03:mon.vm03> sudo journalctl -f -n 0 -u ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm03.service 2026-03-10T14:34:50.767 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:34:50.768 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:34:50.995 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:51.039 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:51.314 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:34:51.314 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:34:51.314 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:34:51.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:51 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/849620245' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T14:34:51.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:51 vm00 ceph-mon[47192]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T14:34:51.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:51 vm00 ceph-mon[47192]: from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm00:192.168.123.100=vm00;vm03:192.168.123.103=vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:34:51.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:51 vm00 ceph-mon[47192]: Saving service mon spec with placement vm00:192.168.123.100=vm00;vm03:192.168.123.103=vm03;count:2 2026-03-10T14:34:51.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:51 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:52.372 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:34:52.372 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:34:52.553 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:52.576 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:52 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:52.576 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:52 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/3346512085' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:34:52.576 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:52 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:52.576 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:52 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:52.576 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:52 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:52.576 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:52 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:52.594 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:52.918 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:34:52.918 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:34:52.918 
INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:34:53.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:53 vm00 ceph-mon[47192]: Deploying daemon alertmanager.vm00 on vm00 2026-03-10T14:34:53.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:53 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2656721492' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:34:53.966 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:34:53.967 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:34:54.143 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:54.180 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:54.465 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:34:54.465 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:34:54.465 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:34:54.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:54 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/4292008127' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:34:55.511 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
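The one-line placement '2;vm00:192.168.123.100=vm00;vm03:192.168.123.103=vm03' applied above pins the mon count, hosts, addresses, and daemon names. The same pinned placement can be expressed as a service spec file and applied with 'ceph orch apply -i'; a sketch equivalent to the one-liner:

    cat > mon.yaml <<'EOF'
    service_type: mon
    placement:
      count: 2
      hosts:
        - vm00:192.168.123.100=vm00
        - vm03:192.168.123.103=vm03
    EOF
    ceph orch apply -i mon.yaml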
2026-03-10T14:34:55.512 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:34:55.712 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:55.750 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:56.210 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:34:56.210 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:34:56.211 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 
14:34:56 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/3932376163' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:34:57.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:56 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:34:57.280 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:34:57.280 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:34:57.500 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:57.540 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:57.888 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:34:57.888 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:34:57.889 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:34:58.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:57 vm00 ceph-mon[47192]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T14:34:58.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:57 vm00 ceph-mon[47192]: Deploying daemon grafana.vm00 on vm00 2026-03-10T14:34:58.963 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:34:58.963 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:34:59.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:58 vm00 ceph-mon[47192]: from='client.? 
192.168.123.103:0/94911706' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:34:59.150 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:59.202 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:34:59.609 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:34:59.609 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:34:59.609 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:00.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:34:59 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/3657666024' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:00.658 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:35:00.659 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:35:00.839 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:00.883 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:01.159 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:35:01.160 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:35:01.160 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:01.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:01 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/347283826' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:02.238 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T14:35:02.238 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:35:02.430 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:02.471 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:02.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:02 vm00 ceph-mon[47192]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:02.811 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:35:02.811 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:35:02.811 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1981444550' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:03.369 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:03 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:03.896 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
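The repeated "Waiting for 2 mons in monmap..." / "ceph mon dump -f json" pairs are a poll loop: the task re-reads the monmap until it lists two mons. A bash sketch of the same loop (assumes jq is available):

    until [ "$(ceph mon dump -f json 2>/dev/null | jq '.mons | length')" -ge 2 ]; do
        sleep 2    # monmap above is still epoch 1 with only vm00
    done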
2026-03-10T14:35:03.896 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:35:04.091 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:04.135 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:04.428 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:35:04.429 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:35:04.429 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:04.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:04 vm00 ceph-mon[47192]: Deploying daemon prometheus.vm00 on vm00 2026-03-10T14:35:04.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:04 vm00 ceph-mon[47192]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:05.503 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:35:05.503 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:35:05.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:05 vm00 ceph-mon[47192]: from='client.? 
192.168.123.103:0/2803300695' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:05.695 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:05.727 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:06.051 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:35:06.051 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:35:06.051 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:06.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:06 vm00 ceph-mon[47192]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:06.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:06 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1362324637' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:07.206 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:35:07.206 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:35:07.403 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:07.441 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:07.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:07 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:07.733 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:35:07.733 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:35:07.733 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:08.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:08 vm00 ceph-mon[47192]: pgmap v7: 0 pgs: ; 0 B data, 0 
B used, 0 B / 0 B avail 2026-03-10T14:35:08.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:08 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1633013883' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:08.812 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T14:35:08.812 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:35:08.999 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:09.047 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:09.432 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:35:09.433 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:35:09.433 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:09 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2344182208' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:10.524 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
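While the monmap sits at epoch 1 with only vm00, the mgr is still working through its queue of daemon deployments (prometheus, grafana, alertmanager above). If a second mon takes unusually long to appear, these commands show what cephadm is doing; a sketch, none of them are run in this log:

    ceph orch ls mon                 # desired vs running count for the mon service
    ceph orch ps --daemon-type mon   # per-daemon status
    ceph log last cephadm            # recent cephadm events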
2026-03-10T14:35:10.524 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:35:10.552 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:10 vm00 ceph-mon[47192]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:10.740 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:10.781 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:11.147 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:35:11.147 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:35:11.147 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:12.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:11 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:12.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:11 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:12.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:11 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:12.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:11 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T14:35:12.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:11 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2925024553' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:12.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:11 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:12.227 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T14:35:12.227 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json 2026-03-10T14:35:12.423 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:12.473 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T14:35:12.767 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T14:35:12.768 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T14:35:12.768 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T14:35:12.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:12 vm00 ceph-mon[47192]: from='mgr.14162 192.168.123.100:0/842810699' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T14:35:12.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:12 vm00 ceph-mon[47192]: mgrmap e14: vm00.qkhroe(active, since 30s) 2026-03-10T14:35:13.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:13 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1867767938' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T14:35:14.004 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T14:35:14.004 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json
2026-03-10T14:35:14.193 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T14:35:14.240 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T14:35:14.517 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:35:14.517 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T14:35:14.517 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1
2026-03-10T14:35:14.982 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:14 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2224457110' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:15.581 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T14:35:15.581 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json
2026-03-10T14:35:15.824 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T14:35:15.873 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T14:35:16.034 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: Active manager daemon vm00.qkhroe restarted
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: Activating manager daemon vm00.qkhroe
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: osdmap e5: 0 total, 0 up, 0 in
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: mgrmap e15: vm00.qkhroe(active, starting, since 0.00604715s)
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr metadata", "who": "vm00.qkhroe", "id": "vm00.qkhroe"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: Manager daemon vm00.qkhroe is now available
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.qkhroe/mirror_snapshot_schedule"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.qkhroe/trash_purge_schedule"}]: dispatch
2026-03-10T14:35:16.035 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:16.394 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:35:16.394 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T14:35:16.394 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1
2026-03-10T14:35:17.460 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T14:35:17.461 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: mgrmap e16: vm00.qkhroe(active, since 1.02152s)
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2255522277' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: [10/Mar/2026:14:35:16] ENGINE Bus STARTING
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: [10/Mar/2026:14:35:16] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: [10/Mar/2026:14:35:16] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: [10/Mar/2026:14:35:16] ENGINE Bus STARTED
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: [10/Mar/2026:14:35:16] ENGINE Client ('192.168.123.100', 59070) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:17.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:17 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:17.681 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T14:35:17.738 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T14:35:18.051 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:35:18.052 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T14:35:18.052 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1214027565' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: mgrmap e17: vm00.qkhroe(active, since 2s)
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:18.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:18 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:35:19.148 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T14:35:19.148 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json
2026-03-10T14:35:19.554 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:19.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: Updating vm00:/etc/ceph/ceph.conf
2026-03-10T14:35:19.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: Updating vm03:/etc/ceph/ceph.conf
2026-03-10T14:35:19.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: Updating vm03:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:19.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: Updating vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:19.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:35:19.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:35:19.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: Updating vm03:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.client.admin.keyring
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: Updating vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.client.admin.keyring
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm03", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm03", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-10T14:35:19.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:19 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:20.171 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:35:20.171 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T14:35:20.171 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1
2026-03-10T14:35:20.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:20 vm00 ceph-mon[47192]: Deploying daemon ceph-exporter.vm03 on vm03
2026-03-10T14:35:20.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:20 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/3046959312' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:21.331 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T14:35:21.331 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json
2026-03-10T14:35:21.687 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:22.031 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:35:22.031 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T14:35:22.031 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1
2026-03-10T14:35:22.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:21 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:22.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:21 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:22.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:21 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:22.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:21 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:22.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:21 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm03", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T14:35:22.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:21 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm03", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-10T14:35:22.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:21 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:22.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:21 vm00 ceph-mon[47192]: Deploying daemon crash.vm03 on vm03
2026-03-10T14:35:23.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:22 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:23.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:22 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:23.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:22 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:23.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:22 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:23.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:22 vm00 ceph-mon[47192]: Deploying daemon node-exporter.vm03 on vm03
2026-03-10T14:35:23.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:22 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2504156379' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:23.225 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T14:35:23.225 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json
2026-03-10T14:35:23.405 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:23.677 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:35:23.677 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T14:35:23.677 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1
2026-03-10T14:35:24.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:23 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/3699030872' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:24.749 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T14:35:24.749 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json
2026-03-10T14:35:25.104 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:25.435 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:35:25.435 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:34:15.033123Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T14:35:25.435 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm03.iylznd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm03.iylznd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: Deploying daemon mgr.vm03.iylznd on vm03
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:25.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:35:25.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:25 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:26.444 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 systemd[1]: Starting Ceph mon.vm03 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf...
2026-03-10T14:35:26.588 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T14:35:26.588 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mon dump -f json
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 podman[54077]: 2026-03-10 14:35:26.442774641 +0000 UTC m=+0.022668502 container create d2ba0bf1bcdceb0da5e92e452b42d6a93c39780f2b7750c05f478013becc6581 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm03, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20260223, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 podman[54077]: 2026-03-10 14:35:26.48283483 +0000 UTC m=+0.062728711 container init d2ba0bf1bcdceb0da5e92e452b42d6a93c39780f2b7750c05f478013becc6581 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm03, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team )
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 podman[54077]: 2026-03-10 14:35:26.492050942 +0000 UTC m=+0.071944813 container start d2ba0bf1bcdceb0da5e92e452b42d6a93c39780f2b7750c05f478013becc6581 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm03, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223)
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 bash[54077]: d2ba0bf1bcdceb0da5e92e452b42d6a93c39780f2b7750c05f478013becc6581
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 podman[54077]: 2026-03-10 14:35:26.429989661 +0000 UTC m=+0.009883542 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 systemd[1]: Started Ceph mon.vm03 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf.
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: pidfile_write: ignore empty --pid-file
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: load: jerasure load: lrc
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: RocksDB version: 7.9.2
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Git sha 0
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: DB SUMMARY
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: DB Session ID: 6G60731CYY10P7P9C1HD
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: CURRENT file: CURRENT
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm03/store.db dir, Total Num: 0, files:
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm03/store.db: 000004.log size: 511 ;
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.error_if_exists: 0
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.create_if_missing: 0
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.paranoid_checks: 1
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.env: 0x55711d8a7dc0
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.fs: PosixFileSystem
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.info_log: 0x55711f076de0
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_file_opening_threads: 16
2026-03-10T14:35:26.718 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.statistics: (nil)
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.use_fsync: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_log_file_size: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.keep_log_file_num: 1000
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.recycle_log_file_num: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.allow_fallocate: 1
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.allow_mmap_reads: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.allow_mmap_writes: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.use_direct_reads: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.create_missing_column_families: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.db_log_dir:
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.wal_dir:
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.advise_random_on_open: 1
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.db_write_buffer_size: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.write_buffer_manager: 0x55711f07b900
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.rate_limiter: (nil)
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.wal_recovery_mode: 2
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.enable_thread_tracking: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.enable_pipelined_write: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.unordered_write: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.row_cache: None
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.wal_filter: None
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.allow_ingest_behind: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.two_write_queues: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.manual_wal_flush: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.wal_compression: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.atomic_flush: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.log_readahead_size: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.best_efforts_recovery: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.allow_data_in_errors: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.db_host_id: __hostname__
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_background_jobs: 2
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_background_compactions: -1
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_subcompactions: 1
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T14:35:26.719 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_total_wal_size: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_open_files: -1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bytes_per_sync: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_readahead_size: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_background_flushes: -1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Compression algorithms supported:
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: kZSTD supported: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: kXpressCompression supported: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: kBZip2Compression supported: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: kLZ4Compression supported: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: kZlibCompression supported: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: kLZ4HCCompression supported: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: kSnappyCompression supported: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm03/store.db/MANIFEST-000005
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.merge_operator:
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_filter: None
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_filter_factory: None
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.sst_partitioner_factory: None
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55711f0765c0)
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: cache_index_and_filter_blocks: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: pin_top_level_index_and_filter: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: index_type: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: data_block_index_type: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: index_shortening: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: checksum: 4
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: no_block_cache: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: block_cache: 0x55711f09b350
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: block_cache_name: BinnedLRUCache
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: block_cache_options:
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: capacity : 536870912
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: num_shard_bits : 4
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: strict_capacity_limit : 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: high_pri_pool_ratio: 0.000
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: block_cache_compressed: (nil)
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: persistent_cache: (nil)
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: block_size: 4096
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: block_size_deviation: 10
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: block_restart_interval: 16
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: index_block_restart_interval: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: metadata_block_size: 4096
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: partition_filters: 0
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: use_delta_encoding: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: filter_policy: bloomfilter
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: whole_key_filtering: 1
2026-03-10T14:35:26.720 INFO:journalctl@ceph.mon.vm03.vm03.stdout: verify_compression: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout: read_amp_bytes_per_bit: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout: format_version: 5
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout: enable_index_compression: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout: block_align: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout: max_auto_readahead_size: 262144
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout: prepopulate_block_cache: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout: initial_auto_readahead_size: 8192
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout: num_file_reads_for_auto_readahead: 2
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.write_buffer_size: 33554432
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_write_buffer_number: 2
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression: NoCompression
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression: Disabled
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.prefix_extractor: nullptr
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.num_levels: 7
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.level: 32767
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.strategy: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.enabled: false
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.target_file_size_base: 67108864
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.arena_block_size: 1048576
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.disable_auto_compactions: 0
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T14:35:26.721 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.inplace_update_support: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.bloom_locality: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.max_successive_merges: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.paranoid_file_checks: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.force_consistency_checks: 1
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.report_bg_io_stats: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.ttl: 2592000
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.enable_blob_files: false
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.min_blob_size: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.blob_file_size: 268435456
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.blob_file_starting_level: 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm03/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d863882b-b872-405f-88ea-2e6b9401d9b3
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773153326537197, "job": 1, "event": "recovery_started", "wal_files": [4]}
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773153326539171, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773153326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d863882b-b872-405f-88ea-2e6b9401d9b3", "db_session_id": "6G60731CYY10P7P9C1HD", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773153326539287, "job": 1, "event": "recovery_finished"}
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm03/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55711f09ce00
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: DB pointer 0x55711f1a8000
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03 does not exist in monmap, will attempt to join an existing cluster
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: using public_addr v2:192.168.123.103:0/0 -> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0]
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: starting mon.vm03 rank -1 at public addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] at bind addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon_data /var/lib/ceph/mon/ceph-vm03 fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(???) e0 preinit fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: ** DB Stats **
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout:
2026-03-10T14:35:26.722 INFO:journalctl@ceph.mon.vm03.vm03.stdout: ** Compaction Stats [default] **
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: ** Compaction Stats [default] **
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: AddFile(Keys): cumulative 0, interval 0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Block cache BinnedLRUCache@0x55711f09b350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: ** File Read Latency Histogram By Level [default] **
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).mds e1 new map
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).mds e1 print_map
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: e1
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: btime 2026-03-10T14:34:16:500910+0000
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: enable_multiple, ever_enabled_multiple: 1,1
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: legacy client fscid: -1
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout: No filesystems configured
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e5 e5: 0 total, 0 up, 0 in
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e5 crush map has features 3314932999778484224, adjusting msgr requires
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Updating vm00:/etc/ceph/ceph.conf
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Updating vm03:/etc/ceph/ceph.conf
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Updating vm03:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Updating vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Updating vm03:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.client.admin.keyring
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Updating vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.client.admin.keyring
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm03", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm03", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-10T14:35:26.723 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Deploying daemon ceph-exporter.vm03 on vm03
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/3046959312' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm03", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm03", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Deploying daemon crash.vm03 on vm03
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Deploying daemon node-exporter.vm03 on vm03
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/2504156379' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/3699030872' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm03.iylznd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm03.iylznd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Deploying daemon mgr.vm03.iylznd on vm03
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: Deploying daemon mon.vm03 on vm03
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/1084267991' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:26.724 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:26 vm03 ceph-mon[54091]: mon.vm03@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
2026-03-10T14:35:26.803 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm03/config
2026-03-10T14:35:26.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:26 vm00 ceph-mon[47192]: Deploying daemon mon.vm03 on vm03
2026-03-10T14:35:26.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:26 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1084267991' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:31.958 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch
2026-03-10T14:35:31.958 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:31.958 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: mon.vm00 calling monitor election
2026-03-10T14:35:31.958 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: mon.vm03 calling monitor election
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.? 192.168.123.103:0/707316525' entity='mgr.vm03.iylznd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm03.iylznd/crt"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: mon.vm00 is new leader, mons vm00,vm03 in quorum (ranks 0,1)
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: monmap epoch 2
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: last_changed 2026-03-10T14:35:26.585381+0000
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: created 2026-03-10T14:34:15.033123+0000
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: min_mon_release 19 (squid)
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: election_strategy: 1
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.vm00
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.vm03
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: fsmap
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: osdmap e5: 0 total, 0 up, 0 in
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: mgrmap e17: vm00.qkhroe(active, since 16s)
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: overall HEALTH_OK
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: Standby manager daemon vm03.iylznd started
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.? 192.168.123.103:0/707316525' entity='mgr.vm03.iylznd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.? 192.168.123.103:0/707316525' entity='mgr.vm03.iylznd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm03.iylznd/key"}]: dispatch
2026-03-10T14:35:31.959 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:31 vm03 ceph-mon[54091]: from='mgr.? 192.168.123.103:0/707316525' entity='mgr.vm03.iylznd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: mon.vm00 calling monitor election
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: mon.vm03 calling monitor election
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.? 192.168.123.103:0/707316525' entity='mgr.vm03.iylznd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm03.iylznd/crt"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: mon.vm00 is new leader, mons vm00,vm03 in quorum (ranks 0,1)
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: monmap epoch 2
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: last_changed 2026-03-10T14:35:26.585381+0000
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: created 2026-03-10T14:34:15.033123+0000
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: min_mon_release 19 (squid)
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: election_strategy: 1
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.vm00
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.vm03
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: fsmap
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: osdmap e5: 0 total, 0 up, 0 in
2026-03-10T14:35:32.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: mgrmap e17: vm00.qkhroe(active, since 16s)
2026-03-10T14:35:32.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: overall HEALTH_OK
2026-03-10T14:35:32.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: Standby manager daemon vm03.iylznd started
2026-03-10T14:35:32.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:32.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.? 192.168.123.103:0/707316525' entity='mgr.vm03.iylznd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T14:35:32.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.? 192.168.123.103:0/707316525' entity='mgr.vm03.iylznd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm03.iylznd/key"}]: dispatch
2026-03-10T14:35:32.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:31 vm00 ceph-mon[47192]: from='mgr.? 192.168.123.103:0/707316525' entity='mgr.vm03.iylznd' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T14:35:32.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: mgrmap e18: vm00.qkhroe(active, since 16s), standbys: vm03.iylznd
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr metadata", "who": "vm03.iylznd", "id": "vm03.iylznd"}]: dispatch
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:32 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:33.021 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: mgrmap e18: vm00.qkhroe(active, since 16s), standbys: vm03.iylznd
2026-03-10T14:35:33.021 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr metadata", "who": "vm03.iylznd", "id": "vm03.iylznd"}]: dispatch
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:33.022 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:32 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch
2026-03-10T14:35:33.103 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T14:35:33.103 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":2,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","modified":"2026-03-10T14:35:26.585381Z","created":"2026-03-10T14:34:15.033123Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm03","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T14:35:33.103 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 2
2026-03-10T14:35:33.171 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T14:35:33.171 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph config generate-minimal-conf
2026-03-10T14:35:33.426 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:35:33.721 INFO:teuthology.orchestra.run.vm00.stdout:# minimal ceph.conf for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:35:33.721 INFO:teuthology.orchestra.run.vm00.stdout:[global]
2026-03-10T14:35:33.721 INFO:teuthology.orchestra.run.vm00.stdout: fsid = 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:35:33.721 INFO:teuthology.orchestra.run.vm00.stdout: mon_host = [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0]
2026-03-10T14:35:33.807 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T14:35:33.807 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:35:33.807 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: Updating vm00:/etc/ceph/ceph.conf
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: Updating vm03:/etc/ceph/ceph.conf
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: Updating vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: Updating vm03:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: Reconfiguring mon.vm00 (unknown last config time)...
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: Reconfiguring daemon mon.vm00 on vm00
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm00.qkhroe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1823272323' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm00", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm00", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T14:35:33.847 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:33 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:33.852 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:35:33.852 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:35:33.927 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:35:33.927 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T14:35:33.955 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:35:33.955 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: Updating vm00:/etc/ceph/ceph.conf
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: Updating vm03:/etc/ceph/ceph.conf
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: Updating vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: Updating vm03:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/config/ceph.conf
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: Reconfiguring mon.vm00 (unknown last config time)...
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: Reconfiguring daemon mon.vm00 on vm00
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm00.qkhroe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:34.020 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/1823272323' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T14:35:34.021 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.021 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.021 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm00", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T14:35:34.021 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:34.021 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.021 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.021 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm00", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T14:35:34.021 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:33 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:34.024 INFO:tasks.cephadm:Deploying OSDs...
2026-03-10T14:35:34.024 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:35:34.024 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T14:35:34.040 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:35:34.050 DEBUG:teuthology.orchestra.run.vm00:> ls /dev/[sv]d?
2026-03-10T14:35:34.110 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vda
2026-03-10T14:35:34.110 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdb
2026-03-10T14:35:34.110 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdc
2026-03-10T14:35:34.110 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdd
2026-03-10T14:35:34.110 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vde
2026-03-10T14:35:34.110 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T14:35:34.110 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T14:35:34.110 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdb
2026-03-10T14:35:34.204 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdb
2026-03-10T14:35:34.205 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T14:35:34.205 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T14:35:34.205 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:35:34.205 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T14:35:34.205 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 14:34:45.830671989 +0000
2026-03-10T14:35:34.205 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 14:33:14.185597583 +0000
2026-03-10T14:35:34.205 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 14:33:14.185597583 +0000
2026-03-10T14:35:34.205 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-10 14:30:48.262000000 +0000
2026-03-10T14:35:34.205 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T14:35:34.241 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T14:35:34.242 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T14:35:34.242 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000118301 s, 4.3 MB/s
2026-03-10T14:35:34.243 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T14:35:34.318 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdc
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdc
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 14:34:45.894672076 +0000
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 14:33:14.225597635 +0000
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 14:33:14.225597635 +0000
2026-03-10T14:35:34.378 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-10 14:30:48.266000000 +0000
2026-03-10T14:35:34.378 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T14:35:34.446 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T14:35:34.446 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T14:35:34.446 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000143891 s, 3.6 MB/s
2026-03-10T14:35:34.447 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T14:35:34.511 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdd
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdd
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 14:34:45.965672173 +0000
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 14:33:14.199597601 +0000
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 14:33:14.199597601 +0000
2026-03-10T14:35:34.574 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-10 14:30:48.270000000 +0000
2026-03-10T14:35:34.574 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T14:35:34.663 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T14:35:34.663 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T14:35:34.663 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000319267 s, 1.6 MB/s
2026-03-10T14:35:34.664 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T14:35:34.687 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vde
2026-03-10T14:35:34.755 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vde
2026-03-10T14:35:34.755 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T14:35:34.755 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T14:35:34.756 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:35:34.756 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T14:35:34.756 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 14:34:46.015672242 +0000
2026-03-10T14:35:34.756 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 14:33:14.200597602 +0000
2026-03-10T14:35:34.756 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 14:33:14.200597602 +0000
2026-03-10T14:35:34.756 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-10 14:30:48.300000000 +0000
2026-03-10T14:35:34.756 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: Reconfiguring mgr.vm00.qkhroe (unknown last config time)...
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: Reconfiguring daemon mgr.vm00.qkhroe on vm00
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: Reconfiguring ceph-exporter.vm00 (monmap changed)...
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: Reconfiguring daemon ceph-exporter.vm00 on vm00
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: Reconfiguring crash.vm00 (monmap changed)...
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: Reconfiguring daemon crash.vm00 on vm00
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3727232602' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.900 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:34 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:34.906 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T14:35:34.906 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T14:35:34.906 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.00116472 s, 440 kB/s
2026-03-10T14:35:34.907 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T14:35:34.956 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:35:34.956 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T14:35:34.974 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:35:34.975 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d?
2026-03-10T14:35:35.035 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda
2026-03-10T14:35:35.035 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb
2026-03-10T14:35:35.035 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc
2026-03-10T14:35:35.035 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd
2026-03-10T14:35:35.035 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde
2026-03-10T14:35:35.035 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T14:35:35.035 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T14:35:35.035 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 14:35:18.180713752 +0000
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 14:33:16.864596435 +0000
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 14:33:16.864596435 +0000
2026-03-10T14:35:35.096 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-10 14:30:17.276000000 +0000
2026-03-10T14:35:35.096 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T14:35:35.160 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: Reconfiguring mgr.vm00.qkhroe (unknown last config time)...
2026-03-10T14:35:35.160 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: Reconfiguring daemon mgr.vm00.qkhroe on vm00
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: Reconfiguring ceph-exporter.vm00 (monmap changed)...
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: Reconfiguring daemon ceph-exporter.vm00 on vm00
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: Reconfiguring crash.vm00 (monmap changed)...
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: Reconfiguring daemon crash.vm00 on vm00
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3727232602' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:35.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:34 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:35.162 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T14:35:35.162 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T14:35:35.162 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000155892 s, 3.3 MB/s
2026-03-10T14:35:35.163 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T14:35:35.221 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 222 Links: 1 Device type: fc,20
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 14:35:18.204713757 +0000
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 14:33:16.886554495 +0000
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 14:33:16.886554495 +0000
2026-03-10T14:35:35.281 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-10 14:30:17.280000000 +0000
2026-03-10T14:35:35.281 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T14:35:35.349 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T14:35:35.349 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T14:35:35.349 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000205104 s, 2.5 MB/s
2026-03-10T14:35:35.350 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T14:35:35.413 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 223 Links: 1 Device type: fc,30
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 14:35:18.228713762 +0000
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 14:33:16.864596435 +0000
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 14:33:16.864596435 +0000
2026-03-10T14:35:35.475 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-10 14:30:17.284000000 +0000
2026-03-10T14:35:35.475 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T14:35:35.543 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T14:35:35.544 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T14:35:35.544 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000197721 s, 2.6 MB/s
2026-03-10T14:35:35.545 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T14:35:35.609 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout:Device: 6h/6d Inode: 226 Links: 1 Device type: fc,40
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 14:35:18.253713768 +0000
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 14:33:16.879567840 +0000
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 14:33:16.879567840 +0000
2026-03-10T14:35:35.671 INFO:teuthology.orchestra.run.vm03.stdout: Birth: 2026-03-10 14:30:17.288000000 +0000
2026-03-10T14:35:35.672 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T14:35:35.744 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-10T14:35:35.745 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-10T14:35:35.745 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000231232 s, 2.2 MB/s
2026-03-10T14:35:35.746 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T14:35:35.806 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch apply osd --all-available-devices
2026-03-10T14:35:35.872 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:35 vm03 ceph-mon[54091]: Reconfiguring alertmanager.vm00 (dependencies changed)...
2026-03-10T14:35:35.872 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:35 vm03 ceph-mon[54091]: Reconfiguring daemon alertmanager.vm00 on vm00
2026-03-10T14:35:35.872 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:35 vm03 ceph-mon[54091]: Reconfiguring grafana.vm00 (dependencies changed)...
2026-03-10T14:35:35.872 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:35 vm03 ceph-mon[54091]: Reconfiguring daemon grafana.vm00 on vm00
2026-03-10T14:35:35.873 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:35 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:35.873 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:35 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:35.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:35 vm00 ceph-mon[47192]: Reconfiguring alertmanager.vm00 (dependencies changed)...
2026-03-10T14:35:35.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:35 vm00 ceph-mon[47192]: Reconfiguring daemon alertmanager.vm00 on vm00
2026-03-10T14:35:35.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:35 vm00 ceph-mon[47192]: Reconfiguring grafana.vm00 (dependencies changed)...
2026-03-10T14:35:35.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:35 vm00 ceph-mon[47192]: Reconfiguring daemon grafana.vm00 on vm00
2026-03-10T14:35:35.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:35 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:35.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:35 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:36.022 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm03/config
2026-03-10T14:35:36.273 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled osd.all-available-devices update...
2026-03-10T14:35:36.322 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
2026-03-10T14:35:36.322 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json
2026-03-10T14:35:36.535 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:35:36.827 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:36 vm00 ceph-mon[47192]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:35:36.827 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:36 vm00 ceph-mon[47192]: Reconfiguring prometheus.vm00 (dependencies changed)...
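Note: the entries above show teuthology's per-device readiness probe on each VM before OSD deployment: list /dev/[sv]d?, drop the root device, stat each candidate, read one sector with dd, and confirm nothing has it mounted (ignoring devtmpfs). A minimal standalone sketch of an equivalent check, assuming Python 3 on Linux with root privileges; usable_scratch_devices is an illustrative helper, not teuthology's actual API.

    # Sketch only. Mirrors the probe sequence the log shows: a device is a
    # usable OSD candidate if it is a block special file, its first 512-byte
    # sector is readable (like `dd if=$dev of=/dev/null count=1`), and it does
    # not appear in the mount table.
    import glob
    import os
    import stat
    import subprocess

    def usable_scratch_devices(root_dev="/dev/vda"):
        mounts = subprocess.run(["mount"], capture_output=True, text=True).stdout
        usable = []
        for dev in sorted(glob.glob("/dev/[sv]d?")):
            if dev == root_dev:
                continue  # the root disk is removed from the device list
            if not stat.S_ISBLK(os.stat(dev).st_mode):
                continue
            with open(dev, "rb") as f:  # readability probe, first sector only
                f.read(512)
            if any(dev in line and "devtmpfs" not in line
                   for line in mounts.splitlines()):
                continue  # something still has the device mounted
            usable.append(dev)
        return usable

    if __name__ == "__main__":
        print(usable_scratch_devices())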
2026-03-10T14:35:36.827 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:36 vm00 ceph-mon[47192]: Reconfiguring daemon prometheus.vm00 on vm00
2026-03-10T14:35:36.827 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:36 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:36.827 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:36 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:36.827 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:36 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:36.827 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:36 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm03", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T14:35:36.827 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:36 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:36.827 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:35:37.018 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T14:35:37.058 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:36 vm03 ceph-mon[54091]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:35:37.059 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:36 vm03 ceph-mon[54091]: Reconfiguring prometheus.vm00 (dependencies changed)...
2026-03-10T14:35:37.059 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:36 vm03 ceph-mon[54091]: Reconfiguring daemon prometheus.vm00 on vm00
2026-03-10T14:35:37.059 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:36 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.059 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:36 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.059 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:36 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.059 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:36 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm03", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T14:35:37.059 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:36 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: Marking host: vm00 for OSDSpec preview refresh.
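Note: after "Waiting for 8 OSDs to come up..." the harness repeatedly shells out to `ceph osd stat -f json` and inspects the counters. A minimal sketch of that polling pattern, assuming Python 3 and a `ceph` CLI with admin credentials on PATH; wait_for_osds and its interval/timeout values are illustrative, not teuthology's actual implementation.

    # Sketch of the wait loop implied by the log: poll `ceph osd stat -f json`
    # until the reported OSDs exist and are up.
    import json
    import subprocess
    import time

    def wait_for_osds(expected, timeout=600.0, interval=5.0):
        stats = {"num_osds": 0, "num_up_osds": 0}
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(["ceph", "osd", "stat", "-f", "json"],
                                 capture_output=True, text=True,
                                 check=True).stdout
            stats = json.loads(out)
            if stats["num_osds"] >= expected and stats["num_up_osds"] >= expected:
                return stats
            time.sleep(interval)
        raise TimeoutError(
            f"{stats['num_up_osds']}/{expected} OSDs up after {timeout}s")

    # Example: this job provisions 4 scratch devices on each of 2 hosts.
    # wait_for_osds(8)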
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: Marking host: vm03 for OSDSpec preview refresh.
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: Saving service osd.all-available-devices spec with placement *
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: Reconfiguring ceph-exporter.vm03 (monmap changed)...
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: Reconfiguring daemon ceph-exporter.vm03 on vm03
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/702713552' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm03", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm03.iylznd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:35:37.930 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:37 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:38.019 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: Marking host: vm00 for OSDSpec preview refresh.
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: Marking host: vm03 for OSDSpec preview refresh.
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: Saving service osd.all-available-devices spec with placement *
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: Reconfiguring ceph-exporter.vm03 (monmap changed)...
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: Reconfiguring daemon ceph-exporter.vm03 on vm03
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/702713552' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm03", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm03.iylznd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T14:35:38.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T14:35:38.046 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:38.046 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:38.046 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:38.046 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T14:35:38.046 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T14:35:38.046 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:37 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:38.231 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:35:38.512 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:35:38.595 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: Reconfiguring crash.vm03 (monmap changed)...
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: Reconfiguring daemon crash.vm03 on vm03
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: Reconfiguring mgr.vm03.iylznd (monmap changed)...
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: Reconfiguring daemon mgr.vm03.iylznd on vm03
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: Reconfiguring mon.vm03 (monmap changed)...
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: Reconfiguring daemon mon.vm03 on vm03
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm00.local:9095"}]: dispatch
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3014272190' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:35:39.227 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:39 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.465 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: Reconfiguring crash.vm03 (monmap changed)...
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: Reconfiguring daemon crash.vm03 on vm03
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: Reconfiguring mgr.vm03.iylznd (monmap changed)...
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: Reconfiguring daemon mgr.vm03.iylznd on vm03
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: Reconfiguring mon.vm03 (monmap changed)...
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: Reconfiguring daemon mon.vm03 on vm03
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm00.local:9095"}]: dispatch
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3014272190' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:35:39.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:39 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:39.596 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json
2026-03-10T14:35:39.813 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:35:40.186 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:35:40.332 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T14:35:40.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T14:35:40.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T14:35:40.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm00.local:9095"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:35:40.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:40 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm00.local:9095"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T14:35:40.615 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:40 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:35:41.333 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3026104444' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/441321922' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1da99fc8-26cb-4d65-956c-9470c232bd2f"}]: dispatch
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1da99fc8-26cb-4d65-956c-9470c232bd2f"}]: dispatch
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1da99fc8-26cb-4d65-956c-9470c232bd2f"}]': finished
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: osdmap e6: 1 total, 0 up, 1 in
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2890379313' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d0e49c4b-93c2-4b94-9a64-50505f825d61"}]: dispatch
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2890379313' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d0e49c4b-93c2-4b94-9a64-50505f825d61"}]': finished
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: osdmap e7: 2 total, 0 up, 2 in
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:35:41.433 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:41 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:35:41.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3026104444' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/441321922' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1da99fc8-26cb-4d65-956c-9470c232bd2f"}]: dispatch
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1da99fc8-26cb-4d65-956c-9470c232bd2f"}]: dispatch
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1da99fc8-26cb-4d65-956c-9470c232bd2f"}]': finished
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: osdmap e6: 1 total, 0 up, 1 in
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2890379313' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d0e49c4b-93c2-4b94-9a64-50505f825d61"}]: dispatch
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2890379313' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d0e49c4b-93c2-4b94-9a64-50505f825d61"}]': finished
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: osdmap e7: 2 total, 0 up, 2 in
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:35:41.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:41 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:35:41.563 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:35:41.832 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:35:41.889 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1773153341,"num_remapped_pgs":0}
2026-03-10T14:35:42.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:42 vm03 ceph-mon[54091]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:35:42.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:42 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/1870710585' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:35:42.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:42 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2523967824' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:35:42.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:42 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/593982068' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:35:42.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:42 vm00 ceph-mon[47192]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:35:42.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:42 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1870710585' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:35:42.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:42 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2523967824' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T14:35:42.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:42 vm00 ceph-mon[47192]: from='client.?
192.168.123.100:0/593982068' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:42.891 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:43.077 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:43.500 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:43.573 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1773153341,"num_remapped_pgs":0} 2026-03-10T14:35:44.446 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:44 vm03 ceph-mon[54091]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:44.446 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:44 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1108539161' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:44.574 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:44.599 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:44 vm00 ceph-mon[47192]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:44.599 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:44 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1108539161' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:44.788 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:45.071 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:45.156 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773153345,"num_remapped_pgs":0} 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2775640852' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f4045ece-3972-4418-8240-a1da85e47f5c"}]: dispatch 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f4045ece-3972-4418-8240-a1da85e47f5c"}]: dispatch 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f4045ece-3972-4418-8240-a1da85e47f5c"}]': finished 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: osdmap e8: 3 total, 0 up, 3 in 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/3148041174' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:45.507 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1936003480' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "664b43cb-2e32-4e05-9cfe-dfbb3770ad27"}]: dispatch 2026-03-10T14:35:45.508 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1936003480' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "664b43cb-2e32-4e05-9cfe-dfbb3770ad27"}]': finished 2026-03-10T14:35:45.508 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: osdmap e9: 4 total, 0 up, 4 in 2026-03-10T14:35:45.508 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:45.508 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:45.508 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:45.508 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:45.508 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:45 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/4001678169' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/2775640852' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f4045ece-3972-4418-8240-a1da85e47f5c"}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f4045ece-3972-4418-8240-a1da85e47f5c"}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f4045ece-3972-4418-8240-a1da85e47f5c"}]': finished 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: osdmap e8: 3 total, 0 up, 3 in 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/3148041174' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1936003480' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "664b43cb-2e32-4e05-9cfe-dfbb3770ad27"}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1936003480' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "664b43cb-2e32-4e05-9cfe-dfbb3770ad27"}]': finished 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: osdmap e9: 4 total, 0 up, 4 in 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:45.704 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:45 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/4001678169' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:46.156 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:46.366 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:46.489 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:46 vm00 ceph-mon[47192]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:46.489 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:46 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:35:46.489 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:46 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2367774297' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:46.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:46 vm03 ceph-mon[54091]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:46.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:46 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:35:46.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:46 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2367774297' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:46.794 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:46.866 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773153345,"num_remapped_pgs":0} 2026-03-10T14:35:47.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:47 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/262402832' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:47.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:47 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/262402832' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:47.867 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:48.038 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:48.292 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:48.339 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":10,"num_osds":5,"num_up_osds":0,"osd_up_since":0,"num_in_osds":5,"osd_in_since":1773153348,"num_remapped_pgs":0} 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='client.? 
192.168.123.103:0/2869113178' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "51f511c8-c39a-4d89-af37-bd1c6feb4672"}]: dispatch 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "51f511c8-c39a-4d89-af37-bd1c6feb4672"}]: dispatch 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "51f511c8-c39a-4d89-af37-bd1c6feb4672"}]': finished 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: osdmap e10: 5 total, 0 up, 5 in 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:48.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:48.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:35:48.467 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:48 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1493758654' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2869113178' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "51f511c8-c39a-4d89-af37-bd1c6feb4672"}]: dispatch 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "51f511c8-c39a-4d89-af37-bd1c6feb4672"}]: dispatch 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "51f511c8-c39a-4d89-af37-bd1c6feb4672"}]': finished 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: osdmap e10: 5 total, 0 up, 5 in 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:35:48.555 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:48 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1493758654' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:49.340 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:49.534 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/2925267234' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3695531471' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4bd5b7d-19b7-44d8-bace-42e66862ebfe"}]: dispatch 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/3695531471' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4bd5b7d-19b7-44d8-bace-42e66862ebfe"}]': finished 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: osdmap e11: 6 total, 0 up, 6 in 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:35:49.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:49 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3526921478' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/2925267234' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3695531471' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d4bd5b7d-19b7-44d8-bace-42e66862ebfe"}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/3695531471' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d4bd5b7d-19b7-44d8-bace-42e66862ebfe"}]': finished 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: osdmap e11: 6 total, 0 up, 6 in 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:35:49.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:49 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3526921478' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:49.767 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:49.836 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1773153348,"num_remapped_pgs":0} 2026-03-10T14:35:50.836 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:50.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:50 vm03 ceph-mon[54091]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:50.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:50 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/4062093211' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:51.004 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:51.028 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:50 vm00 ceph-mon[47192]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:51.028 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:50 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/4062093211' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:51.266 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:51.372 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1773153348,"num_remapped_pgs":0} 2026-03-10T14:35:51.814 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:51 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1076502181' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:51.815 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:51 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/1970755706' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c10e4d2-191b-4680-98b0-acfc97bb826f"}]: dispatch 2026-03-10T14:35:52.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:51 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1076502181' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:52.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:51 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/1970755706' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c10e4d2-191b-4680-98b0-acfc97bb826f"}]: dispatch 2026-03-10T14:35:52.373 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:52.590 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:52.771 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:52.856 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:52.949 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0} 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c10e4d2-191b-4680-98b0-acfc97bb826f"}]: dispatch 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c10e4d2-191b-4680-98b0-acfc97bb826f"}]': finished 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: osdmap e12: 7 total, 0 up, 7 in 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:35:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/4220243225' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2795968852' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3c924dd8-ca19-48a7-afef-f0acec9d953d"}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/2795968852' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3c924dd8-ca19-48a7-afef-f0acec9d953d"}]': finished 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: osdmap e13: 8 total, 0 up, 8 in 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:35:53.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:52 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:35:53.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c10e4d2-191b-4680-98b0-acfc97bb826f"}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c10e4d2-191b-4680-98b0-acfc97bb826f"}]': finished 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: osdmap e12: 7 total, 0 up, 7 in 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/4220243225' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2795968852' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3c924dd8-ca19-48a7-afef-f0acec9d953d"}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/2795968852' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3c924dd8-ca19-48a7-afef-f0acec9d953d"}]': finished 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: osdmap e13: 8 total, 0 up, 8 in 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:35:53.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:52 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:35:53.950 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:54.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:53 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2824691208' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:54.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:53 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/618147818' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:54.158 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:54.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:53 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2824691208' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:54.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:53 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/618147818' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T14:35:54.411 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:54.494 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0} 2026-03-10T14:35:55.306 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:55 vm03 ceph-mon[54091]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:55.306 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:55 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/670019428' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:55.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:55 vm00 ceph-mon[47192]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:55.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:55 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/670019428' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:55.495 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:55.695 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:55.957 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:56.043 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:56 vm00 ceph-mon[47192]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:56.043 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:56 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T14:35:56.043 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:56 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:35:56.043 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:56 vm00 ceph-mon[47192]: Deploying daemon osd.0 on vm03 2026-03-10T14:35:56.043 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:56 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/4247072811' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:56.069 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0} 2026-03-10T14:35:56.189 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:56 vm03 ceph-mon[54091]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:56.189 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:56 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T14:35:56.189 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:56 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:35:56.189 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:56 vm03 ceph-mon[54091]: Deploying daemon osd.0 on vm03 2026-03-10T14:35:56.189 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:56 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/4247072811' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:57.070 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:57.108 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:57 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T14:35:57.123 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:57 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:35:57.135 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:57 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T14:35:57.136 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:57 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:35:57.329 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:57.823 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:58.008 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0} 2026-03-10T14:35:58.177 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:58 vm03 ceph-mon[54091]: Deploying daemon osd.1 on vm00 2026-03-10T14:35:58.177 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:58 vm03 ceph-mon[54091]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:58.177 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:58 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/3986334855' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:58.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:58 vm00 ceph-mon[47192]: Deploying daemon osd.1 on vm00 2026-03-10T14:35:58.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:58 vm00 ceph-mon[47192]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:35:58.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:58 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3986334855' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:35:59.009 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:35:59.291 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:35:59.583 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:59 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:59.583 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:59 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:59.583 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:59 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T14:35:59.583 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:59 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:35:59.583 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:35:59 vm00 ceph-mon[47192]: Deploying daemon osd.2 on vm03 2026-03-10T14:35:59.584 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:35:59.670 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0} 2026-03-10T14:35:59.710 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:59 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:59.710 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:59 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:35:59.710 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:59 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T14:35:59.710 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:59 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:35:59.710 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:35:59 vm03 ceph-mon[54091]: Deploying daemon osd.2 on vm03 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: from='mgr.14221 
192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: Deploying daemon osd.3 on vm00 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/531800238' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: from='osd.0 [v2:192.168.123.103:6800/648388416,v1:192.168.123.103:6801/648388416]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T14:36:00.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:00 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:36:00.671 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: Deploying daemon osd.3 on vm00 2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/531800238' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: from='osd.0 [v2:192.168.123.103:6800/648388416,v1:192.168.123.103:6801/648388416]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T14:36:00.708 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:00 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:36:00.897 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:01.302 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:01.543 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":14,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0}
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: osdmap e14: 8 total, 0 up, 8 in
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='osd.0 [v2:192.168.123.103:6800/648388416,v1:192.168.123.103:6801/648388416]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T14:36:01.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:01 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3098929805' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:01.770 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T14:36:01.770 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: osdmap e14: 8 total, 0 up, 8 in
2026-03-10T14:36:01.770 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='osd.0 [v2:192.168.123.103:6800/648388416,v1:192.168.123.103:6801/648388416]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T14:36:01.771 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:01 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3098929805' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:02.544 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: Deploying daemon osd.4 on vm03
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: osdmap e15: 8 total, 0 up, 8 in
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:02.627 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:02.628 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:02.628 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:02.628 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:02.628 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:02.628 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T14:36:02.628 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:02.628 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: from='osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:36:02.628 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:02 vm00 ceph-mon[47192]: osdmap e16: 8 total, 0 up, 8 in
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: Deploying daemon osd.4 on vm03
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: osdmap e15: 8 total, 0 up, 8 in
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: from='osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:36:02.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:02 vm03 ceph-mon[54091]: osdmap e16: 8 total, 0 up, 8 in
2026-03-10T14:36:02.805 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:03.252 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:03.329 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":16,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0}
2026-03-10T14:36:03.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: purged_snaps scrub starts
2026-03-10T14:36:03.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: purged_snaps scrub ok
2026-03-10T14:36:03.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: Deploying daemon osd.5 on vm00
2026-03-10T14:36:03.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:03.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:03.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:03.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:03.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='osd.0 ' entity='osd.0'
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='osd.2 [v2:192.168.123.103:6808/1063018866,v1:192.168.123.103:6809/1063018866]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1279329453' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:03.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:03 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: purged_snaps scrub starts
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: purged_snaps scrub ok
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: Deploying daemon osd.5 on vm00
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='osd.0 ' entity='osd.0'
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='osd.2 [v2:192.168.123.103:6808/1063018866,v1:192.168.123.103:6809/1063018866]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1279329453' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:03.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:03 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:04.330 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json
2026-03-10T14:36:04.625 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: purged_snaps scrub starts
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: purged_snaps scrub ok
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045] boot
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: osd.0 [v2:192.168.123.103:6800/648388416,v1:192.168.123.103:6801/648388416] boot
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: osdmap e17: 8 total, 2 up, 8 in
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='osd.2 [v2:192.168.123.103:6808/1063018866,v1:192.168.123.103:6809/1063018866]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: Deploying daemon osd.6 on vm03
2026-03-10T14:36:04.695 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:04 vm00 ceph-mon[47192]: from='osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: purged_snaps scrub starts
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: purged_snaps scrub ok
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: osd.1 [v2:192.168.123.100:6802/2753773045,v1:192.168.123.100:6803/2753773045] boot
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: osd.0 [v2:192.168.123.103:6800/648388416,v1:192.168.123.103:6801/648388416] boot
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: osdmap e17: 8 total, 2 up, 8 in
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='osd.2 [v2:192.168.123.103:6808/1063018866,v1:192.168.123.103:6809/1063018866]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: Deploying daemon osd.6 on vm03
2026-03-10T14:36:04.713 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:04 vm03 ceph-mon[54091]: from='osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T14:36:04.913 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:04.996 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":18,"num_osds":8,"num_up_osds":2,"osd_up_since":1773153363,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0}
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: osdmap e18: 8 total, 2 up, 8 in
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: Deploying daemon osd.7 on vm00
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3705093877' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: osd.2 [v2:192.168.123.103:6808/1063018866,v1:192.168.123.103:6809/1063018866] boot
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: osdmap e19: 8 total, 3 up, 8 in
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:05.736 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:05.737 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:05 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: osdmap e18: 8 total, 2 up, 8 in
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: Deploying daemon osd.7 on vm00
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3705093877' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: osd.2 [v2:192.168.123.103:6808/1063018866,v1:192.168.123.103:6809/1063018866] boot
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: osdmap e19: 8 total, 3 up, 8 in
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:05.896 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:05 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:05.997 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json
2026-03-10T14:36:06.253 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: purged_snaps scrub starts
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: purged_snaps scrub ok
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: from='osd.4 [v2:192.168.123.103:6816/3999228895,v1:192.168.123.103:6817/3999228895]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T14:36:06.662 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:06 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:06.662 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:06.722 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":20,"num_osds":8,"num_up_osds":4,"osd_up_since":1773153366,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0}
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: purged_snaps scrub starts
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: purged_snaps scrub ok
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: pgmap v31: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: from='osd.4 [v2:192.168.123.103:6816/3999228895,v1:192.168.123.103:6817/3999228895]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T14:36:06.834 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:06 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:07.724 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: purged_snaps scrub starts
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: purged_snaps scrub ok
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679] boot
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: osdmap e20: 8 total, 4 up, 8 in
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:07.756 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='osd.4 [v2:192.168.123.103:6816/3999228895,v1:192.168.123.103:6817/3999228895]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3907546559' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:36:07.757 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:07 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: purged_snaps scrub starts
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: purged_snaps scrub ok
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: osd.3 [v2:192.168.123.100:6810/994459679,v1:192.168.123.100:6811/994459679] boot
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: osdmap e20: 8 total, 4 up, 8 in
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='osd.4 [v2:192.168.123.103:6816/3999228895,v1:192.168.123.103:6817/3999228895]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3907546559' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-10T14:36:07.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:07 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T14:36:08.076 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:08.385 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:08.518 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":21,"num_osds":8,"num_up_osds":4,"osd_up_since":1773153366,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0}
2026-03-10T14:36:08.934 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: pgmap v34: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:36:08.934 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: osdmap e21: 8 total, 4 up, 8 in
2026-03-10T14:36:08.934 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:08.934 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:08.934 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:08.934 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:08.934 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T14:36:08.934 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='osd.6 [v2:192.168.123.103:6824/1507024572,v1:192.168.123.103:6825/1507024572]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2779731685' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='osd.6 [v2:192.168.123.103:6824/1507024572,v1:192.168.123.103:6825/1507024572]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: osdmap e22: 8 total, 4 up, 8 in
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T14:36:08.935 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:08 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T14:36:08.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: pgmap v34: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T14:36:08.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: osdmap e21: 8 total, 4 up, 8 in
2026-03-10T14:36:08.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10
14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='osd.6 [v2:192.168.123.103:6824/1507024572,v1:192.168.123.103:6825/1507024572]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/2779731685' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='osd.6 [v2:192.168.123.103:6824/1507024572,v1:192.168.123.103:6825/1507024572]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: osdmap e22: 8 total, 4 up, 8 in 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:08.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:08 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:09.465 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 sudo[64326]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T14:36:09.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 sudo[64326]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T14:36:09.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 sudo[64326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T14:36:09.466 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 sudo[64326]: pam_unix(sudo:session): session closed for user root 2026-03-10T14:36:09.519 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:36:09.548 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 sudo[71172]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T14:36:09.548 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 sudo[71172]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T14:36:09.548 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 sudo[71172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T14:36:09.548 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 sudo[71172]: pam_unix(sudo:session): session closed for user root 2026-03-10T14:36:09.798 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: purged_snaps scrub starts 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: purged_snaps scrub ok 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: purged_snaps scrub starts 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: purged_snaps scrub ok 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='osd.4 ' entity='osd.4' 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 
14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: osd.4 [v2:192.168.123.103:6816/3999228895,v1:192.168.123.103:6817/3999228895] boot 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869] boot 2026-03-10T14:36:09.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: osdmap e23: 8 total, 6 up, 8 in 2026-03-10T14:36:09.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:36:09.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:36:09.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:36:09.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:09.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 
cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:09.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:09 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: purged_snaps scrub starts 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: purged_snaps scrub ok 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: purged_snaps scrub starts 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: purged_snaps scrub ok 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='osd.4 ' entity='osd.4' 2026-03-10T14:36:09.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869]' entity='osd.5' 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mon metadata", "id": "vm03"}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' 
entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: osd.4 [v2:192.168.123.103:6816/3999228895,v1:192.168.123.103:6817/3999228895] boot 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: osd.5 [v2:192.168.123.100:6818/1532812869,v1:192.168.123.100:6819/1532812869] boot 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: osdmap e23: 8 total, 6 up, 8 in 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:09.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:09 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:10.080 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:10.157 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":23,"num_osds":8,"num_up_osds":6,"osd_up_since":1773153369,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0} 2026-03-10T14:36:10.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: pgmap v37: 1 pgs: 1 unknown; 0 B data, 105 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:36:10.811 
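
The mon traffic above shows the per-OSD boot handshake: each starting OSD asks the mon to tag its device class (`osd crush set-device-class`) and then to place itself in the CRUSH hierarchy (`osd crush create-or-move` with its weight under `host=.../root=default`), after which the mon logs `osd.N ... boot` and the osdmap epoch advances. The same two commands, replayed by hand purely for illustration (values copied from the log; in normal operation the OSD issues these itself and nothing needs replaying):

    # Illustrative replay of the two mon commands osd.6 dispatches at boot.
    import subprocess

    def ceph(*args):
        # Assumes a host with an admin keyring, as `cephadm shell -- ceph ...` provides.
        subprocess.check_call(["ceph", *args])

    ceph("osd", "crush", "set-device-class", "hdd", "osd.6")
    ceph("osd", "crush", "create-or-move", "osd.6", "0.0195",
         "host=vm03", "root=default")
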
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/4110428880' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: mgrmap e19: vm00.qkhroe(active, since 55s), standbys: vm03.iylznd 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: from='osd.6 ' entity='osd.6' 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: osdmap e24: 8 total, 6 up, 8 in 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:10.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:10 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: pgmap v37: 1 pgs: 1 unknown; 0 B data, 105 MiB used, 80 GiB / 80 GiB avail 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/4110428880' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: mgrmap e19: vm00.qkhroe(active, since 55s), standbys: vm03.iylznd 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: from='osd.6 ' entity='osd.6' 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: osdmap e24: 8 total, 6 up, 8 in 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:10.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:10 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:11.158 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:36:11.506 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: purged_snaps scrub starts 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: purged_snaps scrub ok 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 
ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: osd.6 [v2:192.168.123.103:6824/1507024572,v1:192.168.123.103:6825/1507024572] boot 2026-03-10T14:36:11.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: osdmap e25: 8 total, 7 up, 8 in 2026-03-10T14:36:11.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:11.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:11 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:11.870 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:11.971 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":25,"num_osds":8,"num_up_osds":7,"osd_up_since":1773153371,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0} 2026-03-10T14:36:12.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: purged_snaps scrub starts 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: purged_snaps scrub ok 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: osd.6 [v2:192.168.123.103:6824/1507024572,v1:192.168.123.103:6825/1507024572] boot 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: osdmap e25: 8 total, 7 up, 8 in 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T14:36:12.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:11 vm03 ceph-mon[54091]: from='mgr.14221 
192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:12.972 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd stat -f json 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: purged_snaps scrub starts 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: purged_snaps scrub ok 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: Detected new or changed devices on vm03 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 559 MiB used, 119 GiB / 120 GiB avail 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/783245211' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340] boot 2026-03-10T14:36:13.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: osdmap e26: 8 total, 8 up, 8 in 2026-03-10T14:36:13.061 
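
Mixed into the OSD boot traffic, the mgr also dispatches `config generate-minimal-conf` and `auth get client.admin`; together these yield the minimal client config and the admin keyring, plausibly the pair cephadm keeps in sync on managed hosts so that later `cephadm shell` invocations can infer a working config. A sketch of fetching both by hand (paths illustrative, not what cephadm itself writes):

    # Sketch of fetching the two artifacts the mgr requests above.
    import subprocess

    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"]).decode()
    admin_key = subprocess.check_output(
        ["ceph", "auth", "get", "client.admin"]).decode()

    with open("/etc/ceph/ceph.conf", "w") as f:
        f.write(minimal_conf)      # essentially fsid + mon_host
    with open("/etc/ceph/ceph.client.admin.keyring", "w") as f:
        f.write(admin_key)
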
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:12 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:13.174 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:13.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: purged_snaps scrub starts 2026-03-10T14:36:13.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: purged_snaps scrub ok 2026-03-10T14:36:13.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: Detected new or changed devices on vm03 2026-03-10T14:36:13.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 559 MiB used, 119 GiB / 120 GiB avail 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/783245211' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340]' entity='osd.7' 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: osd.7 [v2:192.168.123.100:6826/2442857340,v1:192.168.123.100:6827/2442857340] boot 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: osdmap e26: 8 total, 8 up, 8 in 2026-03-10T14:36:13.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:12 vm03 ceph-mon[54091]: 
from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T14:36:13.418 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:13.466 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":27,"num_osds":8,"num_up_osds":8,"osd_up_since":1773153372,"num_in_osds":8,"osd_in_since":1773153352,"num_remapped_pgs":0} 2026-03-10T14:36:13.466 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd dump --format=json 2026-03-10T14:36:13.654 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:13.909 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:13.909 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":27,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","created":"2026-03-10T14:34:16.501295+0000","modified":"2026-03-10T14:36:12.929767+0000","last_up_change":"2026-03-10T14:36:12.613387+0000","last_in_change":"2026-03-10T14:35:52.456392+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":13,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T14:36:07.357621+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"1da99fc8-26cb-4d65-956c-9470c232bd2f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":26,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6801","nonce":648388416}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6803","nonce":648388416}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6807","nonce":648388416}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6805","nonce":648388416}]},"public_addr":"192.168.123.103:6801/648388416","cluster_addr":"192.168.123.103:6803/648388416","heartbeat_back_addr":"192.168.123.103:6807/648388416","heartbeat_front_addr":"192.168.123.103:6805/648388416","state":["exists","up"]},{"osd":1,"uuid":"d0e49c4b-93c2-4b94-9a64-50505f825d61","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6803","nonce":2753773045}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6805","nonce":2753773045}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6809","nonce":2753773045}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6807","nonce":2753773045}]},"public_addr":"192.168.123.100:6803/2753773045","cluster_addr":"192.168.123.100:6805/2753773045","heartbeat_back_addr":"192.168.123.100:6809/2753773045","heartbeat_front_addr":"192.168.123.100:6807/2753773045","state":["exists","up"]},{"osd":2,"uuid":"f4045ece-3972-4418-8240-a1da85e47f5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6809","nonce":1063018866}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6811","nonce":1063018866}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6815","nonce":1063018866}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6813","nonce":1063018866}]},"public_addr":"192.168.123.103:6809/1063018866","cluster_addr":"192.168.123.103:6811/1063018866","heartbeat_back_addr":"192.168.123.103:6815/1063018866","heartbeat_front_addr":"192.168.123.103:6813/1063018866","state":["exists","up"]},{"osd":3,"uuid":"664b43cb-2e32-4e05-9cfe-dfbb3770ad27","up":1,"in":1,"weight":1,"primary_affinity":1,"last_c
lean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6811","nonce":994459679}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6813","nonce":994459679}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6817","nonce":994459679}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6815","nonce":994459679}]},"public_addr":"192.168.123.100:6811/994459679","cluster_addr":"192.168.123.100:6813/994459679","heartbeat_back_addr":"192.168.123.100:6817/994459679","heartbeat_front_addr":"192.168.123.100:6815/994459679","state":["exists","up"]},{"osd":4,"uuid":"51f511c8-c39a-4d89-af37-bd1c6feb4672","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6817","nonce":3999228895}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6819","nonce":3999228895}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6823","nonce":3999228895}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6821","nonce":3999228895}]},"public_addr":"192.168.123.103:6817/3999228895","cluster_addr":"192.168.123.103:6819/3999228895","heartbeat_back_addr":"192.168.123.103:6823/3999228895","heartbeat_front_addr":"192.168.123.103:6821/3999228895","state":["exists","up"]},{"osd":5,"uuid":"d4bd5b7d-19b7-44d8-bace-42e66862ebfe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6819","nonce":1532812869}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6821","nonce":1532812869}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6825","nonce":1532812869}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6823","nonce":1532812869}]},"public_addr":"192.168.123.100:6819/1532812869","cluster_addr":"192.168.123.100:6821/1532812869","heartbeat_back_addr":"192.168.123.100:6825/1532812869","heartbeat_front_addr":"192.168.123.100:6823/1532812869","state":["exists","up"]},{"osd":6,"uuid":"8c10e4d2-191b-4680-98b0-acfc97bb826f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6825","nonce":1507024572}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6827","nonce"
:1507024572}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6830","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6831","nonce":1507024572}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6829","nonce":1507024572}]},"public_addr":"192.168.123.103:6825/1507024572","cluster_addr":"192.168.123.103:6827/1507024572","heartbeat_back_addr":"192.168.123.103:6831/1507024572","heartbeat_front_addr":"192.168.123.103:6829/1507024572","state":["exists","up"]},{"osd":7,"uuid":"3c924dd8-ca19-48a7-afef-f0acec9d953d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6827","nonce":2442857340}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6829","nonce":2442857340}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6833","nonce":2442857340}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6831","nonce":2442857340}]},"public_addr":"192.168.123.100:6827/2442857340","cluster_addr":"192.168.123.100:6829/2442857340","heartbeat_back_addr":"192.168.123.100:6833/2442857340","heartbeat_front_addr":"192.168.123.100:6831/2442857340","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:01.050275+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:02.013439+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:03.847023+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:04.990141+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:06.785749+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:07.465592+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:09.117464+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/404263654":"2026-03-11T14:35:15.303818+0000","192.168.123.100:6801/582169616":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/652711078":"2026-03-11T14:35:15.303818+0000","192.168.123.100:0/992115257":"2026-03-11T14:34:28.514282+
0000","192.168.123.100:0/746697533":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/2367920973":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/2870528121":"2026-03-11T14:34:41.235267+0000","192.168.123.100:0/2748887809":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6800/3959335683":"2026-03-11T14:34:41.235267+0000","192.168.123.100:0/3179955365":"2026-03-11T14:35:15.303818+0000","192.168.123.100:0/3077629101":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6800/582169616":"2026-03-11T14:34:28.514282+0000","192.168.123.100:6800/2024704812":"2026-03-11T14:35:15.303818+0000","192.168.123.100:6801/3959335683":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6801/2024704812":"2026-03-11T14:35:15.303818+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T14:36:14.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:13 vm00 ceph-mon[47192]: Detected new or changed devices on vm00 2026-03-10T14:36:14.077 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:13 vm00 ceph-mon[47192]: osdmap e27: 8 total, 8 up, 8 in 2026-03-10T14:36:14.077 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:13 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1311945490' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:36:14.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:13 vm03 ceph-mon[54091]: Detected new or changed devices on vm00 2026-03-10T14:36:14.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:13 vm03 ceph-mon[54091]: osdmap e27: 8 total, 8 up, 8 in 2026-03-10T14:36:14.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:13 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/1311945490' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T14:36:14.246 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T14:36:07.357621+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '23', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T14:36:14.246 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd pool get .mgr pg_num 2026-03-10T14:36:14.421 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:14.657 INFO:teuthology.orchestra.run.vm00.stdout:pg_num: 1 2026-03-10T14:36:14.730 INFO:tasks.cephadm:Setting up client nodes... 
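Every probe above follows the same pattern: invoke a ceph command inside 'cephadm shell' on the bootstrap host and parse its stdout (stderr carries only the 'Inferring config ...' notice; the answer, e.g. 'pg_num: 1' just above, arrives on stdout). A minimal Python sketch of that pattern, reusing the container image and fsid from this job; get_pool_property() is a hypothetical illustration of the idea, not teuthology's actual helper:

    import subprocess

    # Image and fsid copied verbatim from the log above.
    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf"

    def get_pool_property(pool: str, prop: str) -> str:
        """Hypothetical helper: run 'ceph osd pool get <pool> <prop>' inside
        a cephadm shell and return the value from the '<prop>: <value>' line."""
        result = subprocess.run(
            ["sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE,
             "shell", "--fsid", FSID, "--",
             "ceph", "osd", "pool", "get", pool, prop],
            check=True, capture_output=True, text=True,
        )
        # cephadm prints "Inferring config ..." on stderr; ignore it and
        # scan stdout for the requested property.
        for line in result.stdout.splitlines():
            if line.startswith(prop + ":"):
                return line.split(":", 1)[1].strip()
        raise RuntimeError("unexpected output: " + result.stdout)

    # get_pool_property(".mgr", "pg_num") would return "1" for the run above.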
2026-03-10T14:36:14.730 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-10T14:36:14.920 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:14.980 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:14 vm00 ceph-mon[47192]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 986 MiB used, 139 GiB / 140 GiB avail
2026-03-10T14:36:14.981 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:14 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2012420333' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T14:36:14.981 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:14 vm00 ceph-mon[47192]: osdmap e28: 8 total, 8 up, 8 in
2026-03-10T14:36:14.981 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:14 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/82250099' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T14:36:15.203 INFO:teuthology.orchestra.run.vm00.stdout:[client.0]
2026-03-10T14:36:15.204 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBfLLBp/0HMCxAAnCtnr9I2gc+agdt1M+XBmA==
2026-03-10T14:36:15.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:14 vm03 ceph-mon[54091]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 986 MiB used, 139 GiB / 140 GiB avail
2026-03-10T14:36:15.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:14 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2012420333' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch
2026-03-10T14:36:15.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:14 vm03 ceph-mon[54091]: osdmap e28: 8 total, 8 up, 8 in
2026-03-10T14:36:15.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:14 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/82250099' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch
2026-03-10T14:36:15.257 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T14:36:15.257 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.0.keyring
2026-03-10T14:36:15.257 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-10T14:36:15.298 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
2026-03-10T14:36:15.487 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm03/config
2026-03-10T14:36:15.781 INFO:teuthology.orchestra.run.vm03.stdout:[client.1]
2026-03-10T14:36:15.781 INFO:teuthology.orchestra.run.vm03.stdout: key = AQBfLLBplioyLhAAsUl4P9cz6QqmLyP5hTc/9A==
2026-03-10T14:36:15.838 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:15 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1018367858' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:36:15.838 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:15 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1018367858' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T14:36:15.838 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:15 vm03 ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:36:15.838 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:15 vm03 ceph-mon[54091]: from='client.? 192.168.123.103:0/3861015292' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:36:15.838 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:15 vm03 ceph-mon[54091]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:36:15.838 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:15 vm03 ceph-mon[54091]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T14:36:15.838 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T14:36:15.838 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.1.keyring
2026-03-10T14:36:15.838 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring
2026-03-10T14:36:15.879 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-10T14:36:15.879 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-10T14:36:15.879 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mgr dump --format=json
2026-03-10T14:36:16.054 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:16.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:15 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1018367858' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:36:16.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:15 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/1018367858' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T14:36:16.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:15 vm00 ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:36:16.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:15 vm00 ceph-mon[47192]: from='client.? 192.168.123.103:0/3861015292' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:36:16.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:15 vm00 ceph-mon[47192]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T14:36:16.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:15 vm00 ceph-mon[47192]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T14:36:16.473 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:16.531 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":19,"flags":0,"active_gid":14221,"active_name":"vm00.qkhroe","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":2798747162},{"type":"v1","addr":"192.168.123.100:6801","nonce":2798747162}]},"active_addr":"192.168.123.100:6801/2798747162","active_change":"2026-03-10T14:35:15.304084+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14246,"name":"vm03.iylznd","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: 
name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked 
down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container 
image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.100:8443/","prometheus":"http://192.168.123.100:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":3859785316}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":3373191273}]},{"na
me":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1735634344}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1620697716}]}]} 2026-03-10T14:36:16.534 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T14:36:16.534 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T14:36:16.534 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd dump --format=json 2026-03-10T14:36:16.729 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:16.960 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:16.961 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":28,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","created":"2026-03-10T14:34:16.501295+0000","modified":"2026-03-10T14:36:13.997238+0000","last_up_change":"2026-03-10T14:36:12.613387+0000","last_in_change":"2026-03-10T14:35:52.456392+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":13,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T14:36:07.357621+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"1da99fc8-26cb-4d65-956c-9470c232bd2f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":26,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6801","nonce":648388416}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6803","nonce":648388416}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6807","nonce":648388416}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6805","nonce":648388416}]},"public_addr":"192.168.123.103:6801/648388416","cluster_addr":"192.168.123.103:6803/648388416","heartbeat_back_addr":"192.168.123.103:6807/648388416","heartbeat_front_addr":"192.168.123.103:6805/648388416","state":["exists","up"]},{"osd":1,"uuid":"d0e49c4b-93c2-4b94-9a64-50505f825d61","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6803","nonce":2753773045}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6805","nonce":2753773045}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6809","nonce":2753773045}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6807","nonce":2753773045}]},"public_addr":"192.168.123.100:6803/2753773045","cluster_addr":"192.168.123.100:6805/2753773045","heartbeat_back_addr":"192.168.123.100:6809/2753773045","heartbeat_front_addr":"192.168.123.100:6807/2753773045","state":["exists","up"]},{"osd":2,"uuid":"f4045ece-3972-4418-8240-a1da85e47f5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6809","nonce":1063018866}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6811","nonce":1063018866}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6815","nonce":1063018866}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6813","nonce":1063018866}]},"public_addr":"192.168.123.103:6809/1063018866","cluster_addr":"192.168.123.103:6811/1063018866","heartbeat_back_addr":"192.168.123.103:6815/1063018866","heartbeat_front_addr":"192.168.123.103:6813/1063018866","state":["exists","up"]},{"osd":3,"uuid":"664b43cb-2e32-4e05-9cfe-dfbb3770ad27","up":1,"in":1,"weight":1,"primary_affinity":1,"last_c
lean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6811","nonce":994459679}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6813","nonce":994459679}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6817","nonce":994459679}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6815","nonce":994459679}]},"public_addr":"192.168.123.100:6811/994459679","cluster_addr":"192.168.123.100:6813/994459679","heartbeat_back_addr":"192.168.123.100:6817/994459679","heartbeat_front_addr":"192.168.123.100:6815/994459679","state":["exists","up"]},{"osd":4,"uuid":"51f511c8-c39a-4d89-af37-bd1c6feb4672","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6817","nonce":3999228895}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6819","nonce":3999228895}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6823","nonce":3999228895}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6821","nonce":3999228895}]},"public_addr":"192.168.123.103:6817/3999228895","cluster_addr":"192.168.123.103:6819/3999228895","heartbeat_back_addr":"192.168.123.103:6823/3999228895","heartbeat_front_addr":"192.168.123.103:6821/3999228895","state":["exists","up"]},{"osd":5,"uuid":"d4bd5b7d-19b7-44d8-bace-42e66862ebfe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6819","nonce":1532812869}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6821","nonce":1532812869}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6825","nonce":1532812869}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6823","nonce":1532812869}]},"public_addr":"192.168.123.100:6819/1532812869","cluster_addr":"192.168.123.100:6821/1532812869","heartbeat_back_addr":"192.168.123.100:6825/1532812869","heartbeat_front_addr":"192.168.123.100:6823/1532812869","state":["exists","up"]},{"osd":6,"uuid":"8c10e4d2-191b-4680-98b0-acfc97bb826f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6825","nonce":1507024572}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6827","nonce"
:1507024572}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6830","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6831","nonce":1507024572}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6829","nonce":1507024572}]},"public_addr":"192.168.123.103:6825/1507024572","cluster_addr":"192.168.123.103:6827/1507024572","heartbeat_back_addr":"192.168.123.103:6831/1507024572","heartbeat_front_addr":"192.168.123.103:6829/1507024572","state":["exists","up"]},{"osd":7,"uuid":"3c924dd8-ca19-48a7-afef-f0acec9d953d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6827","nonce":2442857340}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6829","nonce":2442857340}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6833","nonce":2442857340}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6831","nonce":2442857340}]},"public_addr":"192.168.123.100:6827/2442857340","cluster_addr":"192.168.123.100:6829/2442857340","heartbeat_back_addr":"192.168.123.100:6833/2442857340","heartbeat_front_addr":"192.168.123.100:6831/2442857340","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:01.050275+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:02.013439+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:03.847023+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:04.990141+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:06.785749+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:07.465592+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:09.117464+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:10.580430+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/404263654":"2026-03-11T14:35:15.303818+0000","192.168.123.100:6801/582169616":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/652711078":"2026-03-11T14:35:15.303818+0000","192.168.123.100:0/992115257":"202
6-03-11T14:34:28.514282+0000","192.168.123.100:0/746697533":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/2367920973":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/2870528121":"2026-03-11T14:34:41.235267+0000","192.168.123.100:0/2748887809":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6800/3959335683":"2026-03-11T14:34:41.235267+0000","192.168.123.100:0/3179955365":"2026-03-11T14:35:15.303818+0000","192.168.123.100:0/3077629101":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6800/582169616":"2026-03-11T14:34:28.514282+0000","192.168.123.100:6800/2024704812":"2026-03-11T14:35:15.303818+0000","192.168.123.100:6801/3959335683":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6801/2024704812":"2026-03-11T14:35:15.303818+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-10T14:36:17.008 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:16 vm00 ceph-mon[47192]: pgmap v46: 1 pgs: 1 peering; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:36:17.008 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:16 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1367989382' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-10T14:36:17.020 INFO:tasks.cephadm.ceph_manager.ceph:all up!
2026-03-10T14:36:17.020 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd dump --format=json
2026-03-10T14:36:17.201 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:17.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:16 vm03 ceph-mon[54091]: pgmap v46: 1 pgs: 1 peering; 449 KiB data, 1013 MiB used, 159 GiB / 160 GiB avail
2026-03-10T14:36:17.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:16 vm03 ceph-mon[54091]: from='client.? 
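
The "waiting for all up" / "all up!" messages above come from the harness polling `ceph osd dump --format=json` until every OSD is both up and in. A simplified sketch of that check (not the harness's exact code), reusing the JSON structure shown in the dump:

    import json
    import subprocess

    def all_osds_up(osd_dump: dict) -> bool:
        # Each entry in "osds" carries up/in as 0/1 integers, per the dump above.
        return all(o["up"] == 1 and o["in"] == 1 for o in osd_dump["osds"])

    # Hypothetical standalone usage; in this run the command is wrapped
    # in `cephadm shell` as shown in the DEBUG lines.
    raw = subprocess.check_output(["ceph", "osd", "dump", "--format=json"])
    print("all up!" if all_osds_up(json.loads(raw)) else "waiting for all up")
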
192.168.123.100:0/1367989382' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T14:36:17.453 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:17.586 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":28,"fsid":"14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf","created":"2026-03-10T14:34:16.501295+0000","modified":"2026-03-10T14:36:13.997238+0000","last_up_change":"2026-03-10T14:36:12.613387+0000","last_in_change":"2026-03-10T14:35:52.456392+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":13,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T14:36:07.357621+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"23","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"1da99fc8-26cb-4d65-956c-9470c232bd2f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":26,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6801","nonce":648388416}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6803","nonce":648388416}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6807","nonce":648388416}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":648388416},{"type":"v1","addr":"192.168.123.103:6805","nonce":648388416}]},"public_addr":"192.168.123.103:6801/648388416","cluster_addr":"192.168.123.103:6803/648388416","heartbeat_back_addr":"192.168.123.103:6807/648388416","heartbeat_front_addr":"192.168.123.103:6805/648388416","state":["exists","up"]},{"osd":1,"uuid":"d0e49c4b-93c2-4b94-9a64-50505f825d61","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":17,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6803","nonce":2753773045}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6805","nonce":2753773045}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6809","nonce":2753773045}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":2753773045},{"type":"v1","addr":"192.168.123.100:6807","nonce":2753773045}]},"public_addr":"192.168.123.100:6803/2753773045","cluster_addr":"192.168.123.100:6805/2753773045","heartbeat_back_addr":"192.168.123.100:6809/2753773045","heartbeat_front_addr":"192.168.123.100:6807/2753773045","state":["exists","up"]},{"osd":2,"uuid":"f4045ece-3972-4418-8240-a1da85e47f5c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6809","nonce":1063018866}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6810","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6811","nonce":1063018866}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6814","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6815","nonce":1063018866}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6812","nonce":1063018866},{"type":"v1","addr":"192.168.123.103:6813","nonce":1063018866}]},"public_addr":"192.168.123.103:6809/1063018866","cluster_addr":"192.168.123.103:6811/1063018866","heartbeat_back_addr":"192.168.123.103:6815/1063018866","heartbeat_front_addr":"192.168.123.103:6813/1063018866","state":["exists","up"]},{"osd":3,"uuid":"664b43cb-2e32-4e05-9cfe-dfbb3770ad27","up":1,"in":1,"weight":1,"primary_affinity":1,"last_c
lean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6811","nonce":994459679}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6813","nonce":994459679}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6817","nonce":994459679}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":994459679},{"type":"v1","addr":"192.168.123.100:6815","nonce":994459679}]},"public_addr":"192.168.123.100:6811/994459679","cluster_addr":"192.168.123.100:6813/994459679","heartbeat_back_addr":"192.168.123.100:6817/994459679","heartbeat_front_addr":"192.168.123.100:6815/994459679","state":["exists","up"]},{"osd":4,"uuid":"51f511c8-c39a-4d89-af37-bd1c6feb4672","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6816","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6817","nonce":3999228895}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6818","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6819","nonce":3999228895}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6822","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6823","nonce":3999228895}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6820","nonce":3999228895},{"type":"v1","addr":"192.168.123.103:6821","nonce":3999228895}]},"public_addr":"192.168.123.103:6817/3999228895","cluster_addr":"192.168.123.103:6819/3999228895","heartbeat_back_addr":"192.168.123.103:6823/3999228895","heartbeat_front_addr":"192.168.123.103:6821/3999228895","state":["exists","up"]},{"osd":5,"uuid":"d4bd5b7d-19b7-44d8-bace-42e66862ebfe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":23,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6819","nonce":1532812869}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6821","nonce":1532812869}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6825","nonce":1532812869}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":1532812869},{"type":"v1","addr":"192.168.123.100:6823","nonce":1532812869}]},"public_addr":"192.168.123.100:6819/1532812869","cluster_addr":"192.168.123.100:6821/1532812869","heartbeat_back_addr":"192.168.123.100:6825/1532812869","heartbeat_front_addr":"192.168.123.100:6823/1532812869","state":["exists","up"]},{"osd":6,"uuid":"8c10e4d2-191b-4680-98b0-acfc97bb826f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":25,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6824","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6825","nonce":1507024572}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6826","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6827","nonce"
:1507024572}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6830","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6831","nonce":1507024572}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6828","nonce":1507024572},{"type":"v1","addr":"192.168.123.103:6829","nonce":1507024572}]},"public_addr":"192.168.123.103:6825/1507024572","cluster_addr":"192.168.123.103:6827/1507024572","heartbeat_back_addr":"192.168.123.103:6831/1507024572","heartbeat_front_addr":"192.168.123.103:6829/1507024572","state":["exists","up"]},{"osd":7,"uuid":"3c924dd8-ca19-48a7-afef-f0acec9d953d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6827","nonce":2442857340}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6829","nonce":2442857340}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6833","nonce":2442857340}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":2442857340},{"type":"v1","addr":"192.168.123.100:6831","nonce":2442857340}]},"public_addr":"192.168.123.100:6827/2442857340","cluster_addr":"192.168.123.100:6829/2442857340","heartbeat_back_addr":"192.168.123.100:6833/2442857340","heartbeat_front_addr":"192.168.123.100:6831/2442857340","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:01.050275+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:02.013439+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:03.847023+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:04.990141+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:06.785749+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:07.465592+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:09.117464+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T14:36:10.580430+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/404263654":"2026-03-11T14:35:15.303818+0000","192.168.123.100:6801/582169616":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/652711078":"2026-03-11T14:35:15.303818+0000","192.168.123.100:0/992115257":"202
6-03-11T14:34:28.514282+0000","192.168.123.100:0/746697533":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/2367920973":"2026-03-11T14:34:28.514282+0000","192.168.123.100:0/2870528121":"2026-03-11T14:34:41.235267+0000","192.168.123.100:0/2748887809":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6800/3959335683":"2026-03-11T14:34:41.235267+0000","192.168.123.100:0/3179955365":"2026-03-11T14:35:15.303818+0000","192.168.123.100:0/3077629101":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6800/582169616":"2026-03-11T14:34:28.514282+0000","192.168.123.100:6800/2024704812":"2026-03-11T14:35:15.303818+0000","192.168.123.100:6801/3959335683":"2026-03-11T14:34:41.235267+0000","192.168.123.100:6801/2024704812":"2026-03-11T14:35:15.303818+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T14:36:17.743 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph tell osd.0 flush_pg_stats 2026-03-10T14:36:17.743 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph tell osd.1 flush_pg_stats 2026-03-10T14:36:17.743 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph tell osd.2 flush_pg_stats 2026-03-10T14:36:17.743 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph tell osd.3 flush_pg_stats 2026-03-10T14:36:17.743 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph tell osd.4 flush_pg_stats 2026-03-10T14:36:17.743 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph tell osd.5 flush_pg_stats 2026-03-10T14:36:17.744 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph tell osd.6 flush_pg_stats 2026-03-10T14:36:17.744 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph tell osd.7 flush_pg_stats 2026-03-10T14:36:18.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:17 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/2809579654' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:36:18.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:17 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/201558326' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:36:18.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:17 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2809579654' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:36:18.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:17 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/201558326' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T14:36:18.559 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:18.559 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:18.575 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:18.595 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:18.627 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:18.809 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:18.809 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:18.841 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:18.928 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:18 vm00 ceph-mon[47192]: pgmap v47: 1 pgs: 1 peering; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:36:19.163 INFO:teuthology.orchestra.run.vm00.stdout:81604378628 2026-03-10T14:36:19.163 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.2 2026-03-10T14:36:19.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:18 vm03 ceph-mon[54091]: pgmap v47: 1 pgs: 1 peering; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T14:36:19.384 INFO:teuthology.orchestra.run.vm00.stdout:85899345924 2026-03-10T14:36:19.384 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.3 2026-03-10T14:36:19.604 INFO:teuthology.orchestra.run.vm00.stdout:98784247811 2026-03-10T14:36:19.604 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.4 2026-03-10T14:36:19.639 INFO:teuthology.orchestra.run.vm00.stdout:98784247811 2026-03-10T14:36:19.644 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.5 2026-03-10T14:36:19.678 INFO:teuthology.orchestra.run.vm00.stdout:73014444036 2026-03-10T14:36:19.678 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.1 2026-03-10T14:36:19.748 INFO:teuthology.orchestra.run.vm00.stdout:107374182403 2026-03-10T14:36:19.748 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.6 2026-03-10T14:36:19.767 INFO:teuthology.orchestra.run.vm00.stdout:111669149699 2026-03-10T14:36:19.767 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.7 2026-03-10T14:36:19.770 INFO:teuthology.orchestra.run.vm00.stdout:73014444038 2026-03-10T14:36:19.770 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.0 2026-03-10T14:36:19.816 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:20.253 INFO:teuthology.orchestra.run.vm00.stdout:81604378628 2026-03-10T14:36:20.625 INFO:tasks.cephadm.ceph_manager.ceph:need seq 81604378628 got 81604378628 for osd.2 2026-03-10T14:36:20.625 DEBUG:teuthology.parallel:result is None 2026-03-10T14:36:20.675 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:20.683 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:20.737 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:20.743 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:20.769 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:20.813 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:20 vm00 ceph-mon[47192]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 67 KiB/s, 0 objects/s recovering 2026-03-10T14:36:20.813 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:20 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/1716127292' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T14:36:20.853 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:20.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:20 vm03 ceph-mon[54091]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 67 KiB/s, 0 objects/s recovering 2026-03-10T14:36:20.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:20 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1716127292' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T14:36:20.996 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:21.366 INFO:teuthology.orchestra.run.vm00.stdout:73014444036 2026-03-10T14:36:21.496 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444038 got 73014444036 for osd.0 2026-03-10T14:36:21.574 INFO:teuthology.orchestra.run.vm00.stdout:111669149698 2026-03-10T14:36:21.650 INFO:teuthology.orchestra.run.vm00.stdout:85899345924 2026-03-10T14:36:21.718 INFO:teuthology.orchestra.run.vm00.stdout:98784247811 2026-03-10T14:36:21.751 INFO:tasks.cephadm.ceph_manager.ceph:need seq 111669149699 got 111669149698 for osd.7 2026-03-10T14:36:21.775 INFO:teuthology.orchestra.run.vm00.stdout:98784247810 2026-03-10T14:36:21.784 INFO:teuthology.orchestra.run.vm00.stdout:73014444035 2026-03-10T14:36:21.858 INFO:teuthology.orchestra.run.vm00.stdout:107374182403 2026-03-10T14:36:21.910 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:21 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1803022764' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:36:21.910 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:21 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1739907245' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:36:21.910 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:21 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2428551994' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T14:36:21.910 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:21 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2342569436' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T14:36:21.910 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:21 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1839809777' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:36:21.910 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:21 vm00 ceph-mon[47192]: from='client.? 
192.168.123.100:0/311847806' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:36:21.920 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345924 got 85899345924 for osd.3 2026-03-10T14:36:21.920 DEBUG:teuthology.parallel:result is None 2026-03-10T14:36:21.936 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247811 got 98784247810 for osd.5 2026-03-10T14:36:21.979 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444036 got 73014444035 for osd.1 2026-03-10T14:36:21.986 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247811 got 98784247811 for osd.4 2026-03-10T14:36:21.986 DEBUG:teuthology.parallel:result is None 2026-03-10T14:36:22.001 INFO:tasks.cephadm.ceph_manager.ceph:need seq 107374182403 got 107374182403 for osd.6 2026-03-10T14:36:22.001 DEBUG:teuthology.parallel:result is None 2026-03-10T14:36:22.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:21 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1803022764' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:36:22.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:21 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1739907245' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:36:22.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:21 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2428551994' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T14:36:22.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:21 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2342569436' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T14:36:22.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:21 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1839809777' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:36:22.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:21 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/311847806' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:36:22.497 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.0 2026-03-10T14:36:22.686 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:22.752 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.7 2026-03-10T14:36:22.936 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.5 2026-03-10T14:36:22.940 INFO:teuthology.orchestra.run.vm00.stdout:73014444038 2026-03-10T14:36:22.979 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph osd last-stat-seq osd.1 2026-03-10T14:36:23.001 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:23.037 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:22 vm00 ceph-mon[47192]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s, 0 objects/s recovering 2026-03-10T14:36:23.037 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:22 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1258057707' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T14:36:23.038 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444038 got 73014444038 for osd.0 2026-03-10T14:36:23.038 DEBUG:teuthology.parallel:result is None 2026-03-10T14:36:23.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:22 vm03 ceph-mon[54091]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 53 KiB/s, 0 objects/s recovering 2026-03-10T14:36:23.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:22 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/1258057707' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T14:36:23.364 INFO:teuthology.orchestra.run.vm00.stdout:111669149699 2026-03-10T14:36:23.469 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:23.475 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:23.490 INFO:tasks.cephadm.ceph_manager.ceph:need seq 111669149699 got 111669149699 for osd.7 2026-03-10T14:36:23.490 DEBUG:teuthology.parallel:result is None 2026-03-10T14:36:23.745 INFO:teuthology.orchestra.run.vm00.stdout:98784247812 2026-03-10T14:36:23.747 INFO:teuthology.orchestra.run.vm00.stdout:73014444037 2026-03-10T14:36:23.799 INFO:tasks.cephadm.ceph_manager.ceph:need seq 98784247811 got 98784247812 for osd.5 2026-03-10T14:36:23.799 DEBUG:teuthology.parallel:result is None 2026-03-10T14:36:23.851 INFO:tasks.cephadm.ceph_manager.ceph:need seq 73014444036 got 73014444037 for osd.1 2026-03-10T14:36:23.851 DEBUG:teuthology.parallel:result is None 2026-03-10T14:36:23.851 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T14:36:23.851 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph pg dump --format=json 2026-03-10T14:36:24.053 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:24.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:23 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1105751395' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:36:24.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:23 vm00 ceph-mon[47192]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:36:24.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:23 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2619803336' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:36:24.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:23 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/2924163361' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:36:24.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:23 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/1880827334' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:36:24.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:23 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1105751395' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T14:36:24.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:23 vm03 ceph-mon[54091]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 45 KiB/s, 0 objects/s recovering 2026-03-10T14:36:24.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:23 vm03 ceph-mon[54091]: from='client.? 
192.168.123.100:0/2619803336' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T14:36:24.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:23 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/2924163361' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T14:36:24.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:23 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/1880827334' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T14:36:24.286 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:24.286 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T14:36:24.363 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":50,"stamp":"2026-03-10T14:36:23.324322+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":4,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":218208,"kb_used_data":3468,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167521184,"statfs":{"total":171765137408,"available":171541692416,"internally_reserved":0,"allocated":3551232,"data_stored":2203272,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,
"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"10.001511"},"pg_stats":[{"pgid":"1.0","version":"22'32","reported_seq":17,"reported_epoch":28,"state":"active+clean","last_fresh":"2026-03-10T14:36:14.220271+0000","last_change":"2026-03-10T14:36:14.220271+0000","last_active":"2026-03-10T14:36:14.220271+0000","last_peered":"2026-03-10T14:36:14.220271+0000","last_clean":"2026-03-10T14:36:14.220271+0000","last_became_active":"2026-03-10T14:36:14.214237+0000","last_became_peered":"2026-03-10T14:36:14.214237+0000","last_unstale":"2026-03-10T14:36:14.220271+0000","last_undegraded":"2026-03-10T14:36:14.220271+0000","last_fullsized":"2026-03-10T14:36:14.220271+0000","mapping_epoch":27,"log_start":"0'0","ondisk_log_start":"0'0","created":21,"last_epoch_clean":28,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:36:07.574781+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:36:07.574781+0000","last_clean_scrub_stamp":"2026-03-10T14:36:07.574781+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T19:05:51.330902+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,3],"acting":[7,0,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":7,"up_from":26,"seq":111669149699,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27560,"kb_used_data":716,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939864,"statfs":{"total":21470642176,"available":21442420736,"internally_reserved":0,"allocated":733184,"data_stored":562459,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":25,"seq":107374182404,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27112,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940312,"statfs":{"total":21470642176,"available":21442879488,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"om
ap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":23,"seq":98784247812,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27108,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940316,"statfs":{"total":21470642176,"available":21442883584,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":23,"seq":98784247812,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27104,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940320,"statfs":{"total":21470642176,"available":21442887680,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":20,"seq":85899345925,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27552,"kb_used_data":716,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939872,"statfs":{"total":21470642176,"available":21442428928,"internally_reserved":0,"allocated":733184,"data_stored":562459,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":19,"seq":81604378629,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27108,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940316,"statfs":{"total":21470642176,"available":21442883584,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":17,"seq":73014444037,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27104,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940320,"statfs":{"total":21470642176,"available":21442887680,"internally_reserved":0,"allocated":
270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":17,"seq":73014444038,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27560,"kb_used_data":716,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939864,"statfs":{"total":21470642176,"available":21442420736,"internally_reserved":0,"allocated":733184,"data_stored":562459,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T14:36:24.363 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph pg dump --format=json 2026-03-10T14:36:24.555 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:24.788 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:24.788 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T14:36:24.837 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:24 vm00 ceph-mon[47192]: from='client.14540 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:24.837 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:24 vm00 ceph-mon[47192]: from='client.14544 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:24.860 
INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":50,"stamp":"2026-03-10T14:36:23.324322+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":4,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":218208,"kb_used_data":3468,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167521184,"statfs":{"total":171765137408,"available":171541692416,"internally_reserved":0,"allocated":3551232,"data_stored":2203272,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"10.001511"},"pg_stats":[{"pgid":"1.0","version":"22'32","reported_seq":17,"reported_epoch":28,"state":"active+clean","last_fresh":"2026-03-10T14:36:14.220271+0000","last_change":"2026-03-10T14:36:14.220271+0000","last_activ
e":"2026-03-10T14:36:14.220271+0000","last_peered":"2026-03-10T14:36:14.220271+0000","last_clean":"2026-03-10T14:36:14.220271+0000","last_became_active":"2026-03-10T14:36:14.214237+0000","last_became_peered":"2026-03-10T14:36:14.214237+0000","last_unstale":"2026-03-10T14:36:14.220271+0000","last_undegraded":"2026-03-10T14:36:14.220271+0000","last_fullsized":"2026-03-10T14:36:14.220271+0000","mapping_epoch":27,"log_start":"0'0","ondisk_log_start":"0'0","created":21,"last_epoch_clean":28,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T14:36:07.574781+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T14:36:07.574781+0000","last_clean_scrub_stamp":"2026-03-10T14:36:07.574781+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:05:51.330902+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,3],"acting":[7,0,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":7,"up_from":26,"seq":111669149699,"num_pgs":1,"nu
m_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27560,"kb_used_data":716,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939864,"statfs":{"total":21470642176,"available":21442420736,"internally_reserved":0,"allocated":733184,"data_stored":562459,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":25,"seq":107374182404,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27112,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940312,"statfs":{"total":21470642176,"available":21442879488,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":23,"seq":98784247812,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27108,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940316,"statfs":{"total":21470642176,"available":21442883584,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":23,"seq":98784247812,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27104,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940320,"statfs":{"total":21470642176,"available":21442887680,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":20,"seq":85899345925,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27552,"kb_used_data":716,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939872,"statfs":{"total":21470642176,"available":21442428928,"internally_reserved":0,"allocated":733184,"data_stored":562459,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"
commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":19,"seq":81604378629,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27108,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940316,"statfs":{"total":21470642176,"available":21442883584,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":17,"seq":73014444037,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27104,"kb_used_data":264,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940320,"statfs":{"total":21470642176,"available":21442887680,"internally_reserved":0,"allocated":270336,"data_stored":103179,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":17,"seq":73014444038,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27560,"kb_used_data":716,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939864,"statfs":{"total":21470642176,"available":21442420736,"internally_reserved":0,"allocated":733184,"data_stored":562459,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T14:36:24.860 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T14:36:24.860 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
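
The settling sequence recorded above and below works in three stages: each "ceph tell osd.N flush_pg_stats" returns a per-OSD stats sequence number; the manager then polls "ceph osd last-stat-seq osd.N" until every OSD's reported seq catches up (hence the "need seq ... got ..." retries for osd.0, osd.1, osd.5 and osd.7 above); once stats are fresh it parses "ceph pg dump --format=json" until every PG reports active+clean, and finally polls "ceph health --format=json" for HEALTH_OK. Below is a minimal sketch of the last two checks, assuming only a working local ceph client and the JSON shapes visible in this log; ceph_json, pgs_all_active_clean and wait_until_healthy are illustrative names, not teuthology's actual ceph_manager API.

    import json
    import subprocess
    import time

    def ceph_json(*args):
        # Same commands the log shows, minus the `cephadm shell` wrapper;
        # assumes `ceph` is on PATH with an admin keyring available.
        out = subprocess.check_output(("ceph",) + args + ("--format=json",))
        return json.loads(out)

    def pgs_all_active_clean():
        # The pg dumps above carry pg_map.pg_stats, one record per PG,
        # each with a state string such as "peering" or "active+clean".
        stats = ceph_json("pg", "dump")["pg_map"]["pg_stats"]
        return all(pg["state"] == "active+clean" for pg in stats)

    def wait_until_healthy(timeout=300, interval=3):
        deadline = time.time() + timeout
        while time.time() < deadline:
            # Matches the {"status":"HEALTH_OK","checks":{},"mutes":[]} reply below.
            if pgs_all_active_clean() and ceph_json("health")["status"] == "HEALTH_OK":
                return
            time.sleep(interval)
        raise TimeoutError("cluster never reached active+clean / HEALTH_OK")
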
2026-03-10T14:36:24.860 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy
2026-03-10T14:36:24.861 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph health --format=json
2026-03-10T14:36:25.048 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:25.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:24 vm03 ceph-mon[54091]: from='client.14540 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T14:36:25.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:24 vm03 ceph-mon[54091]: from='client.14544 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T14:36:25.292 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:25.292 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]}
2026-03-10T14:36:25.355 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done
2026-03-10T14:36:25.355 INFO:tasks.cephadm:Setup complete, yielding
2026-03-10T14:36:25.355 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T14:36:25.357 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local
2026-03-10T14:36:25.357 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch status'
2026-03-10T14:36:25.540 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:25.788 INFO:teuthology.orchestra.run.vm00.stdout:Backend: cephadm
2026-03-10T14:36:25.788 INFO:teuthology.orchestra.run.vm00.stdout:Available: Yes
2026-03-10T14:36:25.788 INFO:teuthology.orchestra.run.vm00.stdout:Paused: No
2026-03-10T14:36:25.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:25 vm03 ceph-mon[54091]: from='client.? 192.168.123.100:0/3737948064' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T14:36:25.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:25 vm03 ceph-mon[54091]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 40 KiB/s, 0 objects/s recovering
2026-03-10T14:36:25.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:25 vm03 ceph-mon[54091]: from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:25.980 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch ps'
2026-03-10T14:36:26.169 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:26.195 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:25 vm00 ceph-mon[47192]: from='client.? 192.168.123.100:0/3737948064' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T14:36:26.195 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:25 vm00 ceph-mon[47192]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 40 KiB/s, 0 objects/s recovering
2026-03-10T14:36:26.195 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:25 vm00 ceph-mon[47192]: from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:26.477 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.vm00 vm00 *:9093,9094 running (51s) 15s ago 90s 24.7M - 0.25.0 c8568f914cd2 194c498010dc
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter.vm00 vm00 *:9926 running (98s) 15s ago 98s 8623k - 19.2.3-678-ge911bdeb 654f31e6858e 58262c4cdbf4
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter.vm03 vm03 *:9926 running (65s) 16s ago 65s 6631k - 19.2.3-678-ge911bdeb 654f31e6858e 35aff906f0fb
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:crash.vm00 vm00 running (97s) 15s ago 97s 7612k - 19.2.3-678-ge911bdeb 654f31e6858e 126f45bac52f
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:crash.vm03 vm03 running (64s) 16s ago 64s 7612k - 19.2.3-678-ge911bdeb 654f31e6858e c1af3996c3a6
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:grafana.vm00 vm00 *:3000 running (50s) 15s ago 83s 78.4M - 10.4.0 c8b91775d855 8e4da2fe1e85
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:mgr.vm00.qkhroe vm00 *:9283,8765,8443 running (2m) 15s ago 2m 546M - 19.2.3-678-ge911bdeb 654f31e6858e 4bf3d3f512f8
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:mgr.vm03.iylznd vm03 *:8443,9283,8765 running (61s) 16s ago 61s 487M - 19.2.3-678-ge911bdeb 654f31e6858e 00d21181346d
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:mon.vm00 vm00 running (2m) 15s ago 2m 48.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 6d040919b8d4
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:mon.vm03 vm03 running (60s) 16s ago 59s 42.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d2ba0bf1bcdc
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm00 vm00 *:9100 running (94s) 15s ago 94s 9323k - 1.7.0 72c9c2088986 7d7f17f632f2
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm03 vm03 *:9100 running (62s) 16s ago 62s 9126k - 1.7.0 72c9c2088986 4a4afc29b40f
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm03 running (28s) 16s ago 28s 54.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4ea9782dde33
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (27s) 15s ago 26s 38.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9b0f677e160e
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm03 running (25s) 16s ago 25s 32.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e ea0fb6ebf411
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (24s) 15s ago 24s 33.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 34334b9fd9c6
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm03 running (22s) 16s ago 22s 58.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b3e8a0b24f79
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm00 running (21s) 15s ago 21s 57.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2b081e8e6d9e
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm03 running (20s) 16s ago 20s 28.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 091d86cfc467
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm00 running (18s) 15s ago 18s 19.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b94d36b5badf
2026-03-10T14:36:26.478 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.vm00 vm00 *:9095 running (49s) 15s ago 75s 35.6M - 2.51.0 1d3b7f56885b 62902708a564
2026-03-10T14:36:26.537 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch ls'
2026-03-10T14:36:26.723 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager ?:9093,9094 1/1 16s ago 111s count:1
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter ?:9926 2/2 16s ago 112s *
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:crash 2/2 16s ago 112s *
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:grafana ?:3000 1/1 16s ago 111s count:1
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:mgr 2/2 16s ago 113s count:2
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:mon 2/2 16s ago 96s vm00:192.168.123.100=vm00;vm03:192.168.123.103=vm03;count:2
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter ?:9100 2/2 16s ago 111s *
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:osd.all-available-devices 8 16s ago 50s *
2026-03-10T14:36:26.967 INFO:teuthology.orchestra.run.vm00.stdout:prometheus ?:9095 1/1 16s ago 112s count:1
2026-03-10T14:36:27.004 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:26 vm00 ceph-mon[47192]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:27.041 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch host ls'
2026-03-10T14:36:27.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:26 vm03 ceph-mon[54091]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:27.219 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:27.444 INFO:teuthology.orchestra.run.vm00.stdout:HOST ADDR LABELS STATUS
2026-03-10T14:36:27.444 INFO:teuthology.orchestra.run.vm00.stdout:vm00 192.168.123.100
2026-03-10T14:36:27.444 INFO:teuthology.orchestra.run.vm00.stdout:vm03 192.168.123.103
2026-03-10T14:36:27.444 INFO:teuthology.orchestra.run.vm00.stdout:2 hosts in cluster
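The wait_until_healthy step at the top of this excerpt polls `ceph health --format=json` through the cephadm shell until the JSON status field reads HEALTH_OK; the orch status/ps/ls/host ls calls that follow are the smoke test's first sanity checks. A minimal sketch of such a polling loop, reusing the fsid and cephadm path from this run (the timeout value is an assumption, not the task's configured default):

    # Sketch of a wait-until-healthy poll; not the actual teuthology code.
    import json, subprocess, time

    FSID = "14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf"   # fsid from this run
    CEPHADM = "/home/ubuntu/cephtest/cephadm"

    def wait_until_healthy(timeout=300):
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(
                ["sudo", CEPHADM, "shell", "--fsid", FSID, "--",
                 "ceph", "health", "--format=json"])
            # matches the {"status":"HEALTH_OK",...} JSON logged above
            if json.loads(out)["status"] == "HEALTH_OK":
                return
            time.sleep(5)
        raise TimeoutError("cluster never reached HEALTH_OK")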
2026-03-10T14:36:27.500 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch device ls'
2026-03-10T14:36:27.688 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 15s ago Has a FileSystem, Insufficient space (<5GB)
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdb hdd DWNBRSTVMM00001 20.0G No 15s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdc hdd DWNBRSTVMM00002 20.0G No 15s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdd hdd DWNBRSTVMM00003 20.0G No 15s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vde hdd DWNBRSTVMM00004 20.0G No 15s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 16s ago Has a FileSystem, Insufficient space (<5GB)
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdb hdd DWNBRSTVMM03001 20.0G No 16s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdc hdd DWNBRSTVMM03002 20.0G No 16s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdd hdd DWNBRSTVMM03003 20.0G No 16s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:27.933 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vde hdd DWNBRSTVMM03004 20.0G No 16s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:27.992 INFO:teuthology.run_tasks:Running task vip...
2026-03-10T14:36:27.995 INFO:tasks.vip:Allocating static IPs for each host...
2026-03-10T14:36:27.995 INFO:tasks.vip:peername 192.168.123.100
2026-03-10T14:36:27.995 INFO:tasks.vip:192.168.123.100 in 192.168.123.0/24, pos 99
2026-03-10T14:36:27.995 INFO:tasks.vip:vm00.local static 12.12.0.100, vnet 12.12.0.0/22
2026-03-10T14:36:27.995 INFO:tasks.vip:VIPs are [IPv4Address('12.12.1.100')]
2026-03-10T14:36:27.995 DEBUG:teuthology.orchestra.run.vm00:> sudo ip route ls
2026-03-10T14:36:28.023 INFO:teuthology.orchestra.run.vm00.stdout:default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.100 metric 100
2026-03-10T14:36:28.023 INFO:teuthology.orchestra.run.vm00.stdout:192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.100 metric 100
2026-03-10T14:36:28.025 INFO:tasks.vip:Configuring 12.12.0.100 on vm00.local iface eth0...
2026-03-10T14:36:28.025 DEBUG:teuthology.orchestra.run.vm00:> sudo ip addr add 12.12.0.100/22 dev eth0
2026-03-10T14:36:28.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:27 vm00 ceph-mon[47192]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:28.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:27 vm00 ceph-mon[47192]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:36:28.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:27 vm00 ceph-mon[47192]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:28.089 INFO:tasks.vip:peername 192.168.123.103
2026-03-10T14:36:28.090 INFO:tasks.vip:192.168.123.103 in 192.168.123.0/24, pos 102
2026-03-10T14:36:28.090 INFO:tasks.vip:vm03.local static 12.12.0.103, vnet 12.12.0.0/22
2026-03-10T14:36:28.090 DEBUG:teuthology.orchestra.run.vm03:> sudo ip route ls
2026-03-10T14:36:28.118 INFO:teuthology.orchestra.run.vm03.stdout:default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.103 metric 100
2026-03-10T14:36:28.118 INFO:teuthology.orchestra.run.vm03.stdout:192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.103 metric 100
2026-03-10T14:36:28.120 INFO:tasks.vip:Configuring 12.12.0.103 on vm03.local iface eth0...
2026-03-10T14:36:28.120 DEBUG:teuthology.orchestra.run.vm03:> sudo ip addr add 12.12.0.103/22 dev eth0
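The vip task derives every address from the host's position inside its DHCP subnet: 192.168.123.100 is position 99 in 192.168.123.0/24, which maps to the static 12.12.0.100 inside the 12.12.0.0/22 vnet, and the first virtual IP lands one /24 higher at 12.12.1.100. The offsets below are inferred from the logged values rather than read from the teuthology source:

    # Sketch of the vip task's address arithmetic as it appears in this log.
    import ipaddress

    subnet = ipaddress.ip_network("192.168.123.0/24")
    vnet = ipaddress.ip_network("12.12.0.0/22")
    host = ipaddress.ip_address("192.168.123.100")

    pos = int(host) - int(subnet.network_address) - 1   # 99, as logged
    static = vnet.network_address + pos + 1             # 12.12.0.100
    vip0 = vnet.network_address + 256 + pos + 1         # 12.12.1.100 (one /24 up)
    print(pos, static, vip0)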
2026-03-10T14:36:28.186 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T14:36:28.195 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local
2026-03-10T14:36:28.196 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch device ls --refresh'
2026-03-10T14:36:28.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:27 vm03 ceph-mon[54091]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:28.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:27 vm03 ceph-mon[54091]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:36:28.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:27 vm03 ceph-mon[54091]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:28.409 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 16s ago Has a FileSystem, Insufficient space (<5GB)
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdb hdd DWNBRSTVMM00001 20.0G No 16s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdc hdd DWNBRSTVMM00002 20.0G No 16s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdd hdd DWNBRSTVMM00003 20.0G No 16s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vde hdd DWNBRSTVMM00004 20.0G No 16s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 17s ago Has a FileSystem, Insufficient space (<5GB)
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdb hdd DWNBRSTVMM03001 20.0G No 17s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdc hdd DWNBRSTVMM03002 20.0G No 17s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdd hdd DWNBRSTVMM03003 20.0G No 17s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:28.688 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vde hdd DWNBRSTVMM03004 20.0G No 17s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
2026-03-10T14:36:29.214 INFO:teuthology.run_tasks:Running task vip.exec...
2026-03-10T14:36:29.234 INFO:tasks.vip:Running commands on role host.a host ubuntu@vm00.local
2026-03-10T14:36:29.234 DEBUG:teuthology.orchestra.run.vm00:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'systemctl stop nfs-server'
2026-03-10T14:36:29.260 INFO:teuthology.orchestra.run.vm00.stderr:+ systemctl stop nfs-server
2026-03-10T14:36:29.270 INFO:tasks.vip:Running commands on role host.b host ubuntu@vm03.local
2026-03-10T14:36:29.270 DEBUG:teuthology.orchestra.run.vm03:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'systemctl stop nfs-server'
2026-03-10T14:36:29.298 INFO:teuthology.orchestra.run.vm03.stderr:+ systemctl stop nfs-server
2026-03-10T14:36:29.305 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:29 vm03.local ceph-mon[54091]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:29.305 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:29 vm03.local ceph-mon[54091]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:29.305 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:29 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:36:29.305 INFO:teuthology.run_tasks:Running task cephadm.shell...
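Before any NFS service is deployed, vip.exec stops the distribution's kernel NFS server on every host; otherwise nfsd would still hold port 2049, the port requested for nfs.foo in the spec applied further below. A sketch of the same step outside teuthology (run_on_host is a hypothetical stand-in for its SSH machinery):

    # Sketch: stop the kernel NFS server on each host, mirroring the logged
    # "sudo TESTDIR=... bash -ex -c 'systemctl stop nfs-server'" runs.
    import subprocess

    def run_on_host(host, cmd):
        # teuthology drives remotes over SSH sessions; plain ssh stands in here
        subprocess.check_call(["ssh", host, "sudo", "bash", "-ex", "-c", cmd])

    for host in ("vm00.local", "vm03.local"):
        run_on_host(host, "systemctl stop nfs-server")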
2026-03-10T14:36:29.308 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local
2026-03-10T14:36:29.308 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph fs volume create foofs'
2026-03-10T14:36:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:29 vm00.local ceph-mon[47192]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:29 vm00.local ceph-mon[47192]: from='client.14572 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:29.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:29 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:36:30.073 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:30.161 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:30 vm03.local ceph-mon[54091]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:36:30.459 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:30 vm00.local ceph-mon[47192]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-10T14:36:31.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:31 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:36:31.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:31 vm00.local ceph-mon[47192]: from='client.14576 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "foofs", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:31.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:31 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]: dispatch
2026-03-10T14:36:31.561 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:31 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:31.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:31 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T14:36:31.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:31 vm03.local ceph-mon[54091]: from='client.14576 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "foofs", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:31.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:31 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]: dispatch
2026-03-10T14:36:31.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:31 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:32.810 INFO:teuthology.run_tasks:Running task cephadm.apply...
2026-03-10T14:36:32.814 INFO:tasks.cephadm:Applying spec(s):
placement:
  count: 1
service_id: foo
service_type: nfs
spec:
  port: 2049
  virtual_ip: 12.12.1.100
---
placement:
  count: 1
service_id: nfs.foo
service_type: ingress
spec:
  backend_service: nfs.foo
  keepalive_only: true
  monitor_port: 9002
  virtual_ip: 12.12.1.100/22
2026-03-10T14:36:32.814 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch apply -i -
2026-03-10T14:36:32.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]': finished
2026-03-10T14:36:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: osdmap e29: 8 total, 8 up, 8 in
2026-03-10T14:36:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.foofs.data"}]: dispatch
2026-03-10T14:36:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: pgmap v55: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:32.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:32 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.030 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.030 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool create", "pool": "cephfs.foofs.meta"}]': finished
2026-03-10T14:36:33.030 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: osdmap e29: 8 total, 8 up, 8 in
2026-03-10T14:36:33.031 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.foofs.data"}]: dispatch
2026-03-10T14:36:33.031 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: pgmap v55: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:33.031 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.031 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.031 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.031 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.031 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:32 vm00.local ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00[47188]: 2026-03-10T14:36:32.664+0000 7f3884b07640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T14:36:33.055 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:33.444 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled nfs.foo update...
2026-03-10T14:36:33.445 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled ingress.nfs.foo update...
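cephadm.apply renders the job's spec templates, substituting the {{VIP0}} and {{VIPPREFIXLEN}} placeholders with the VIP the vip task allocated, and pipes the result to `ceph orch apply -i -` on stdin, which is exactly the command logged above. A sketch of that render-and-apply step (the substitution helper is illustrative, not the task's actual code):

    # Sketch: render a service-spec template with the allocated VIP, then
    # feed it to "ceph orch apply -i -" on stdin, as the task does above.
    import subprocess

    spec_template = """\
    placement:
      count: 1
    service_id: foo
    service_type: nfs
    spec:
      port: 2049
      virtual_ip: {{VIP0}}
    """

    rendered = (spec_template
                .replace("{{VIP0}}", "12.12.1.100")
                .replace("{{VIPPREFIXLEN}}", "22"))
    subprocess.run(["sudo", "/home/ubuntu/cephtest/cephadm", "shell", "--",
                    "ceph", "orch", "apply", "-i", "-"],
                   input=rendered.encode(), check=True)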
2026-03-10T14:36:33.503 INFO:teuthology.run_tasks:Running task cephadm.wait_for_service...
2026-03-10T14:36:33.505 INFO:tasks.cephadm:Waiting for ceph service nfs.foo to start (timeout 300)...
2026-03-10T14:36:33.506 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.foofs.data"}]': finished
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: osdmap e30: 8 total, 8 up, 8 in
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "fs new", "fs_name": "foofs", "metadata": "cephfs.foofs.meta", "data": "cephfs.foofs.data"}]: dispatch
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "fs new", "fs_name": "foofs", "metadata": "cephfs.foofs.meta", "data": "cephfs.foofs.data"}]': finished
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: osdmap e31: 8 total, 8 up, 8 in
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: fsmap foofs:0
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: Saving service mds.foofs spec with placement count:2
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:33 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.797 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.foofs.data"}]': finished
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: osdmap e30: 8 total, 8 up, 8 in
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "fs new", "fs_name": "foofs", "metadata": "cephfs.foofs.meta", "data": "cephfs.foofs.data"}]: dispatch
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "fs new", "fs_name": "foofs", "metadata": "cephfs.foofs.meta", "data": "cephfs.foofs.data"}]': finished
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: osdmap e31: 8 total, 8 up, 8 in
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: fsmap foofs:0
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: Saving service mds.foofs spec with placement count:2
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:33.966 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:33 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:34.117 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:34.117 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:32.209862Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:30.277052Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:30.277118Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:32.209904Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:33.441042Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "ports": [9002], "running": 0, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:32.741057Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "running": 0, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:30.277196Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:30.277226Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:33.434584Z service:nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:30.277164Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:30.277256Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:32.209944Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T14:36:34.641 INFO:tasks.cephadm:nfs.foo has 0/1
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: pgmap v58: 65 pgs: 13 creating+peering, 51 unknown, 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='client.14580 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: Saving service nfs.foo spec with placement count:1
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: Saving service ingress.nfs.foo spec with placement count:1
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: osdmap e32: 8 total, 8 up, 8 in
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:35.178 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:36:35.179 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:35.179 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:36:35.179 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm03.oldwcz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T14:36:35.179 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm03.oldwcz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T14:36:35.179 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:35 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: pgmap v58: 65 pgs: 13 creating+peering, 51 unknown, 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='client.14580 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: Saving service nfs.foo spec with placement count:1
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: Saving service ingress.nfs.foo spec with placement count:1
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: osdmap e32: 8 total, 8 up, 8 in
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm03.oldwcz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm03.oldwcz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T14:36:35.310 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:35 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:35.641 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json
2026-03-10T14:36:35.851 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:36.189 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:36.190 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:32.209862Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:30.277052Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:30.277118Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:32.209904Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:33.441042Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "ports": [9002], "running": 0, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:35.688744Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "running": 0, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:30.277196Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:30.277226Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:33.434584Z service:nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:30.277164Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:30.277256Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:32.209944Z", "ports": [9095], "running": 1, "size": 1}}]
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: Deploying daemon mds.foofs.vm03.oldwcz on vm03
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: from='client.14584 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: osdmap e33: 8 total, 8 up, 8 in
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: pgmap v61: 65 pgs: 19 active+clean, 13 creating+peering, 33 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm00.icqynv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm00.icqynv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:36.215 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:36 vm00.local ceph-mon[47192]: Deploying daemon mds.foofs.vm00.icqynv on vm00
2026-03-10T14:36:36.287 INFO:tasks.cephadm:nfs.foo has 0/1
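Each MDS daemon is given its own cephx identity before deployment; the dispatched auth get-or-create commands above carry the standard MDS capability set (mon 'profile mds', osd write access to cephfs-tagged pools). The equivalent one-off command, as a sketch using the daemon name from this run:

    # Sketch: mint a per-daemon MDS key with the same caps the mgr's
    # dispatched "auth get-or-create" command uses above.
    import subprocess

    entity = "mds.foofs.vm00.icqynv"  # daemon name from this run
    subprocess.check_call(
        ["sudo", "/home/ubuntu/cephtest/cephadm", "shell", "--",
         "ceph", "auth", "get-or-create", entity,
         "mon", "profile mds",
         "osd", "allow rw tag cephfs *=*",
         "mds", "allow"])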
"mds.foofs.vm00.icqynv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch 2026-03-10T14:36:36.465 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:36 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "mds.foofs.vm00.icqynv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished 2026-03-10T14:36:36.465 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:36 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:36:36.465 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:36 vm03.local ceph-mon[54091]: Deploying daemon mds.foofs.vm00.icqynv on vm00 2026-03-10T14:36:37.287 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: from='client.14590 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: mds.? [v2:192.168.123.103:6832/3815203638,v1:192.168.123.103:6833/3815203638] up:boot 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: daemon mds.foofs.vm03.oldwcz assigned to filesystem foofs as rank 0 (now has 1 ranks) 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: Cluster is now healthy 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: fsmap foofs:0 1 up:standby 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mds metadata", "who": "foofs.vm03.oldwcz"}]: dispatch 2026-03-10T14:36:37.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:37 vm00.local ceph-mon[47192]: fsmap foofs:1 {0=foofs.vm03.oldwcz=up:creating} 2026-03-10T14:36:37.504 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: from='client.14590 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: mds.? 
[v2:192.168.123.103:6832/3815203638,v1:192.168.123.103:6833/3815203638] up:boot 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: daemon mds.foofs.vm03.oldwcz assigned to filesystem foofs as rank 0 (now has 1 ranks) 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline) 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds) 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: Cluster is now healthy 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: fsmap foofs:0 1 up:standby 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mds metadata", "who": "foofs.vm03.oldwcz"}]: dispatch 2026-03-10T14:36:37.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:37 vm03.local ceph-mon[54091]: fsmap foofs:1 {0=foofs.vm03.oldwcz=up:creating} 2026-03-10T14:36:38.208 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:38.208 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:32.209862Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:30.277052Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:30.277118Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:32.209904Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:33.441042Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "ports": [9002], "running": 0, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "running": 0, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], 
"placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:30.277196Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:30.277226Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:37.311975Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:30.277164Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:30.277256Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:32.209944Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T14:36:38.401 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: daemon mds.foofs.vm03.oldwcz is now active in filesystem foofs as rank 0 2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:38.402 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: Creating key for client.nfs.foo.0.0.vm00.ilvdin
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.ilvdin", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.ilvdin", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: Ensuring nfs.foo.0 is in the ganesha grace table
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: pgmap v62: 65 pgs: 44 active+clean, 13 creating+peering, 8 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: ganesha-rados-grace tool failed: rados_pool_create: -1
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Can't connect to cluster: -1
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Can't connect to cluster: -1
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: pgmap v63: 65 pgs: 44 active+clean, 13 creating+peering, 8 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: mds.? [v2:192.168.123.100:6834/1124331592,v1:192.168.123.100:6835/1124331592] up:boot
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: mds.? [v2:192.168.123.103:6832/3815203638,v1:192.168.123.103:6833/3815203638] up:active
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: fsmap foofs:1 {0=foofs.vm03.oldwcz=up:active} 1 up:standby
2026-03-10T14:36:38.402 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:38 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mds metadata", "who": "foofs.vm00.icqynv"}]: dispatch
2026-03-10T14:36:38.428 INFO:tasks.cephadm:nfs.foo has 0/1
2026-03-10T14:36:38.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: daemon mds.foofs.vm03.oldwcz is now active in filesystem foofs as rank 0
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: Creating key for client.nfs.foo.0.0.vm00.ilvdin
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.ilvdin", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.ilvdin", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: Ensuring nfs.foo.0 is in the ganesha grace table
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: pgmap v62: 65 pgs: 44 active+clean, 13 creating+peering, 8 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: ganesha-rados-grace tool failed: rados_pool_create: -1
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Can't connect to cluster: -1
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout: Can't connect to cluster: -1
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: pgmap v63: 65 pgs: 44 active+clean, 13 creating+peering, 8 unknown; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: mds.? [v2:192.168.123.100:6834/1124331592,v1:192.168.123.100:6835/1124331592] up:boot
2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: mds.?
[v2:192.168.123.103:6832/3815203638,v1:192.168.123.103:6833/3815203638] up:active 2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: fsmap foofs:1 {0=foofs.vm03.oldwcz=up:active} 1 up:standby 2026-03-10T14:36:38.716 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:38 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "mds metadata", "who": "foofs.vm00.icqynv"}]: dispatch 2026-03-10T14:36:39.429 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json 2026-03-10T14:36:39.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:39 vm00.local ceph-mon[47192]: from='client.24377 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:39.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:39 vm00.local ceph-mon[47192]: Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL) 2026-03-10T14:36:39.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:39 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:36:39.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:39 vm00.local ceph-mon[47192]: osdmap e34: 8 total, 8 up, 8 in 2026-03-10T14:36:39.560 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:39 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch 2026-03-10T14:36:39.612 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:39.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:39 vm03.local ceph-mon[54091]: from='client.24377 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:39.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:39 vm03.local ceph-mon[54091]: Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL) 2026-03-10T14:36:39.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:39 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished 2026-03-10T14:36:39.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:39 vm03.local ceph-mon[54091]: osdmap e34: 8 total, 8 up, 8 in 2026-03-10T14:36:39.715 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:39 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch 2026-03-10T14:36:39.871 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:39.871 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:32.209862Z", "ports": [9093, 9094], "running": 1, "size": 
1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:30.277052Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:30.277118Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:32.209904Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:33.441042Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "ports": [9002], "running": 0, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "running": 0, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:30.277196Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:30.277226Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:39.691636Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:30.277164Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, 
"filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:30.277256Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:32.209944Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T14:36:39.925 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T14:36:40.925 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json 2026-03-10T14:36:40.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:40 vm00.local ceph-mon[47192]: pgmap v65: 97 pgs: 32 unknown, 65 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 709 B/s wr, 3 op/s 2026-03-10T14:36:40.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:40 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished 2026-03-10T14:36:40.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:40 vm00.local ceph-mon[47192]: osdmap e35: 8 total, 8 up, 8 in 2026-03-10T14:36:40.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:40 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:40.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:40 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:40.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:40 vm00.local ceph-mon[47192]: 12.12.1.100 is in 12.12.0.0/22 on vm03 interface eth0 2026-03-10T14:36:40.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:40 vm00.local ceph-mon[47192]: Deploying daemon keepalived.nfs.foo.vm03.kmttbu on vm03 2026-03-10T14:36:40.948 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:40 vm00.local ceph-mon[47192]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:36:40.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:40 vm03.local ceph-mon[54091]: pgmap v65: 97 pgs: 32 unknown, 65 active+clean; 450 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 709 B/s wr, 3 op/s 2026-03-10T14:36:40.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:40 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished 2026-03-10T14:36:40.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:40 vm03.local ceph-mon[54091]: osdmap e35: 8 total, 8 up, 8 in 2026-03-10T14:36:40.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:40 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:40.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:40 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:40.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:40 vm03.local ceph-mon[54091]: 12.12.1.100 is in 12.12.0.0/22 on 
vm03 interface eth0 2026-03-10T14:36:40.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:40 vm03.local ceph-mon[54091]: Deploying daemon keepalived.nfs.foo.vm03.kmttbu on vm03 2026-03-10T14:36:40.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:40 vm03.local ceph-mon[54091]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-10T14:36:41.109 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:41.340 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:41.340 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:32.209862Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:30.277052Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:30.277118Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:32.209904Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:33.441042Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "ports": [9002], "running": 0, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "running": 0, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:30.277196Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:30.277226Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:39.691636Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to 
cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:30.277164Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:30.277256Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:32.209944Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T14:36:41.532 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T14:36:41.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:41 vm00.local ceph-mon[47192]: from='client.14606 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:41.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:41 vm00.local ceph-mon[47192]: osdmap e36: 8 total, 8 up, 8 in 2026-03-10T14:36:41.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:41 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:41.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:41 vm03.local ceph-mon[54091]: from='client.14606 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:41.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:41 vm03.local ceph-mon[54091]: osdmap e36: 8 total, 8 up, 8 in 2026-03-10T14:36:41.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:41 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:42.532 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json 2026-03-10T14:36:42.724 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:42.808 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:42 vm00.local ceph-mon[47192]: from='client.14610 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:42.808 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:42 vm00.local ceph-mon[47192]: pgmap v68: 97 pgs: 11 creating+peering, 12 unknown, 74 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 3.4 KiB/s wr, 10 op/s 
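The polling above is teuthology's cephadm.wait_for_service task: it re-runs "ceph orch ls -f json" every second or so and logs "nfs.foo has 0/1" until the service's status.running count reaches status.size. The earlier nfs.foo ERROR event ("grace tool failed: rados_pool_create: -1 / Can't connect to cluster: -1") looks like a startup race: ganesha-rados-grace ran before the mgr's own "osd pool create .nfs" had completed, so the first placement attempt on vm00 failed and the count stayed at 0/1 pending a retry. The same check can be reproduced by hand (a sketch; the cephadm invocation is verbatim from the log above, while jq being available on the host is an assumption):

    # Print running/size for the nfs.foo service, as tasks.cephadm does internally.
    sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json \
      | jq -r '.[] | select(.service_name == "nfs.foo") | "\(.status.running)/\(.status.size)"'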
2026-03-10T14:36:42.808 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:42 vm00.local ceph-mon[47192]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T14:36:42.808 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:42 vm00.local ceph-mon[47192]: mds.? [v2:192.168.123.100:6834/1124331592,v1:192.168.123.100:6835/1124331592] up:standby 2026-03-10T14:36:42.808 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:42 vm00.local ceph-mon[47192]: mds.? [v2:192.168.123.103:6832/3815203638,v1:192.168.123.103:6833/3815203638] up:active 2026-03-10T14:36:42.808 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:42 vm00.local ceph-mon[47192]: fsmap foofs:1 {0=foofs.vm03.oldwcz=up:active} 1 up:standby 2026-03-10T14:36:42.959 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:42.959 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:32.209862Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:30.277052Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:30.277118Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:32.209904Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:33.441042Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "ports": [9002], "running": 0, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "running": 0, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:30.277196Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:30.277226Z", "running": 2, "size": 2}}, {"events": 
["2026-03-10T14:36:39.691636Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:30.277164Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:30.277256Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:32.209944Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T14:36:42.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:42 vm03.local ceph-mon[54091]: from='client.14610 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:42.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:42 vm03.local ceph-mon[54091]: pgmap v68: 97 pgs: 11 creating+peering, 12 unknown, 74 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 3.4 KiB/s wr, 10 op/s 2026-03-10T14:36:42.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:42 vm03.local ceph-mon[54091]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled) 2026-03-10T14:36:42.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:42 vm03.local ceph-mon[54091]: mds.? [v2:192.168.123.100:6834/1124331592,v1:192.168.123.100:6835/1124331592] up:standby 2026-03-10T14:36:42.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:42 vm03.local ceph-mon[54091]: mds.? 
[v2:192.168.123.103:6832/3815203638,v1:192.168.123.103:6833/3815203638] up:active 2026-03-10T14:36:42.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:42 vm03.local ceph-mon[54091]: fsmap foofs:1 {0=foofs.vm03.oldwcz=up:active} 1 up:standby 2026-03-10T14:36:43.026 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T14:36:44.027 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json 2026-03-10T14:36:44.207 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:44.463 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:44.464 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:32.209862Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:30.277052Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:30.277118Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:32.209904Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:44.324811Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "ports": [9002], "running": 0, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "running": 0, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:30.277196Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:30.277226Z", "running": 2, "size": 2}}, {"events": 
["2026-03-10T14:36:39.691636Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:30.277164Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:30.277256Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:32.209944Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T14:36:44.531 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T14:36:44.946 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:44 vm00.local ceph-mon[47192]: from='client.24383 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:44.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:44 vm00.local ceph-mon[47192]: pgmap v69: 97 pgs: 11 creating+peering, 1 unknown, 85 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s wr, 7 op/s 2026-03-10T14:36:44.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:44 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:44.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:44 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:44.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:44 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:44.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:44 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:44.947 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:44 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:36:44.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:44 vm03.local ceph-mon[54091]: from='client.24383 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:44.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:44 vm03.local ceph-mon[54091]: pgmap v69: 97 pgs: 11 creating+peering, 1 
unknown, 85 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 2.3 KiB/s wr, 7 op/s 2026-03-10T14:36:44.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:44 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:44.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:44 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:44.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:44 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:44.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:44 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:44.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:44 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:36:45.532 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json 2026-03-10T14:36:45.695 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:45 vm03.local ceph-mon[54091]: from='client.14618 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:45.695 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:45 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:45.695 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:45 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:45.695 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:45 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:45.695 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:45 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T14:36:45.749 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:45.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:45 vm00.local ceph-mon[47192]: from='client.14618 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:45.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:45 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:45.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:45 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:45.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:45 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:45.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:45 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd blocklist ls", 
"format": "json"}]: dispatch 2026-03-10T14:36:46.004 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:46.004 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:45.708971Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:45.344216Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:45.344265Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:45.708998Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:44.324811Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "last_refresh": "2026-03-10T14:36:45.344955Z", "ports": [9002], "running": 1, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "last_refresh": "2026-03-10T14:36:45.344928Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:45.344760Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:45.344790Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:45.744437Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, 
"service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:45.344722Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:45.344818Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:45.709025Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T14:36:46.067 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: pgmap v70: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s wr, 3 op/s 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: pgmap v71: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s wr, 4 op/s 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: Fencing old nfs.foo.0.0.vm00.ilvdin 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm00.ilvdin"}]: dispatch 2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' 
entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm00.ilvdin"}]': finished
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: Creating key for client.nfs.foo.0.1.vm03.bxbqms
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm03.bxbqms", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm03.bxbqms", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: Ensuring nfs.foo.0 is in the ganesha grace table
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm03.bxbqms-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm03.bxbqms-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: Health check cleared: CEPHADM_DAEMON_PLACE_FAIL (was: Failed to place 1 daemon(s))
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: Cluster is now healthy
2026-03-10T14:36:46.843 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:46 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: pgmap v70: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s wr, 3 op/s
2026-03-10T14:36:47.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:47.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:36:47.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: pgmap v71: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s wr, 4 op/s
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: Fencing old nfs.foo.0.0.vm00.ilvdin
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm00.ilvdin"}]: dispatch
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth rm", "entity": "client.nfs.foo.0.0.vm00.ilvdin"}]': finished
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: Creating key for client.nfs.foo.0.1.vm03.bxbqms
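The fencing sequence above is cephadm retrying the failed NFS daemon on the other host: the mgr removes the cephx key of the daemon that never started on vm00, then mints a key for its replacement on vm03 with the same caps. Reconstructed from the auth payloads logged above, the two calls amount to (a sketch of what the mgr dispatches; not something the test runs by hand):

    # Fence the failed daemon, then key its replacement with the caps from the log.
    ceph auth rm client.nfs.foo.0.0.vm00.ilvdin
    ceph auth get-or-create client.nfs.foo.0.1.vm03.bxbqms \
        mon 'allow r' \
        osd 'allow rw pool=.nfs namespace=foo'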
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm03.bxbqms", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm03.bxbqms", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: Ensuring nfs.foo.0 is in the ganesha grace table
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm03.bxbqms-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.1.vm03.bxbqms-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: Health check cleared: CEPHADM_DAEMON_PLACE_FAIL (was: Failed to place 1 daemon(s))
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: Cluster is now healthy
2026-03-10T14:36:47.061 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:46 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.068 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json
2026-03-10T14:36:47.268 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:47.529 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T14:36:47.529 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:45.708971Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:45.344216Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:45.344265Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:45.708998Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:44.324811Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "last_refresh": "2026-03-10T14:36:45.344955Z", "ports": [9002], "running": 1, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "last_refresh": "2026-03-10T14:36:45.344928Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:45.344760Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:45.344790Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:46.939972Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:45.344722Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:45.344818Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:45.709025Z", "ports": [9095], "running": 1, "size": 1}}]
created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:45.344722Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:45.344818Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:45.709025Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T14:36:47.606 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: Creating rados config object: conf-nfs.foo 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: Creating key for client.nfs.foo.0.1.vm03.bxbqms-rgw 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: Deploying daemon nfs.foo.0.1.vm03.bxbqms on vm03 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: from='client.14632 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:47.878 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:47 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: Creating rados config object: conf-nfs.foo 2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 
2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: Creating key for client.nfs.foo.0.1.vm03.bxbqms-rgw
2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: Deploying daemon nfs.foo.0.1.vm03.bxbqms on vm03
2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: from='client.14632 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:47.980 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:47 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:36:48.607 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json
2026-03-10T14:36:48.983 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:49.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='client.14650 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T14:36:49.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: pgmap v72: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
2026-03-10T14:36:49.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:49.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:49.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:49.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:49.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
"entity": "client.admin"}]: dispatch 2026-03-10T14:36:49.051 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.052 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:36:49.052 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:48 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='client.14650 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: pgmap v72: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T14:36:49.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:48 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.260 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:49.260 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:48.317236Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": 
{"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:48.068134Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:48.068191Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:48.317388Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:44.324811Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "last_refresh": "2026-03-10T14:36:48.068481Z", "ports": [9002], "running": 1, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "last_refresh": "2026-03-10T14:36:48.068451Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:48.068260Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:48.068291Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:48.337852Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "last_refresh": "2026-03-10T14:36:48.068513Z", "ports": [2049], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:48.068227Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": 
"2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:48.068322Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "last_refresh": "2026-03-10T14:36:48.317435Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T14:36:49.326 INFO:tasks.cephadm:nfs.foo has 1/1 2026-03-10T14:36:49.326 INFO:teuthology.run_tasks:Running task cephadm.wait_for_service... 2026-03-10T14:36:49.329 INFO:tasks.cephadm:Waiting for ceph service ingress.nfs.foo to start (timeout 300)... 2026-03-10T14:36:49.329 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph orch ls -f json 2026-03-10T14:36:49.566 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config 2026-03-10T14:36:49.841 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T14:36:49.841 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T14:34:35.817204Z", "last_refresh": "2026-03-10T14:36:48.317236Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:20.725082Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T14:34:34.392184Z", "last_refresh": "2026-03-10T14:36:48.068134Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:21.761516Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T14:34:34.136921Z", "last_refresh": "2026-03-10T14:36:48.068191Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T14:34:35.018777Z", "last_refresh": "2026-03-10T14:36:48.317388Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:36:44.324811Z service:ingress.nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "nfs.foo", "service_name": "ingress.nfs.foo", "service_type": "ingress", "spec": {"backend_service": "nfs.foo", "first_virtual_router_id": 50, "keepalive_only": true, "monitor_port": 9002, "virtual_ip": "12.12.1.100/22"}, "status": {"created": "2026-03-10T14:36:33.435032Z", "last_refresh": "2026-03-10T14:36:48.068481Z", "ports": [9002], "running": 1, "size": 1, "virtual_ip": "12.12.1.100/22"}}, {"events": ["2026-03-10T14:36:37.307894Z service:mds.foofs [INFO] \"service was created\""], "placement": {"count": 2}, "service_id": "foofs", "service_name": "mds.foofs", "service_type": "mds", "status": {"created": "2026-03-10T14:36:32.718149Z", "last_refresh": "2026-03-10T14:36:48.068451Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:25.365085Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": 
"2026-03-10T14:34:33.861439Z", "last_refresh": "2026-03-10T14:36:48.068260Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:26.556927Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm03:192.168.123.103=vm03"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T14:34:50.716386Z", "last_refresh": "2026-03-10T14:36:48.068291Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T14:36:48.337852Z service:nfs.foo [INFO] \"service was created\"", "2026-03-10T14:36:37.476640Z service:nfs.foo [ERROR] \"Failed while placing nfs.foo.0.0.vm00.ilvdin on vm00: grace tool failed: rados_pool_create: -1\nCan't connect to cluster: -1\n\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049, "virtual_ip": "12.12.1.100"}, "status": {"created": "2026-03-10T14:36:33.421291Z", "last_refresh": "2026-03-10T14:36:48.068513Z", "ports": [2049], "running": 1, "size": 1}}, {"events": ["2026-03-10T14:35:24.429764Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T14:34:35.357338Z", "last_refresh": "2026-03-10T14:36:48.068227Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T14:35:36.270515Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T14:35:36.264627Z", "last_refresh": "2026-03-10T14:36:48.068322Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T14:35:26.562838Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T14:34:34.681647Z", "ports": [9095], "running": 0, "size": 1}}] 2026-03-10T14:36:49.854 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: Reconfiguring prometheus.vm00 (dependencies changed)... 2026-03-10T14:36:49.854 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: Reconfiguring daemon prometheus.vm00 on vm00 2026-03-10T14:36:49.854 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: from='client.14654 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:49.854 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' 2026-03-10T14:36:49.938 INFO:tasks.cephadm:ingress.nfs.foo has 1/1 2026-03-10T14:36:49.938 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T14:36:49.941 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local
2026-03-10T14:36:49.941 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph nfs export create cephfs --fsname foofs --cluster-id foo --pseudo-path /fake'
2026-03-10T14:36:50.175 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:50.213 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:50.213 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T14:36:50.213 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T14:36:50.213 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T14:36:50.213 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:49 vm00.local ceph-mon[47192]: pgmap v73: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 902 B/s wr, 2 op/s
2026-03-10T14:36:50.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:49 vm03.local ceph-mon[54091]: Reconfiguring prometheus.vm00 (dependencies changed)...
2026-03-10T14:36:50.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:49 vm03.local ceph-mon[54091]: Reconfiguring daemon prometheus.vm00 on vm00
2026-03-10T14:36:50.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:49 vm03.local ceph-mon[54091]: from='client.14654 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T14:36:50.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:49 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:50.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:49 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:50.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:49 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T14:36:50.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:49 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T14:36:50.216 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:49 vm03.local ceph-mon[54091]: pgmap v73: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 902 B/s wr, 2 op/s 2026-03-10T14:36:50.567 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-10T14:36:50.567 INFO:teuthology.orchestra.run.vm00.stdout: "bind": "/fake", 2026-03-10T14:36:50.567 INFO:teuthology.orchestra.run.vm00.stdout: "cluster": "foo", 2026-03-10T14:36:50.567 INFO:teuthology.orchestra.run.vm00.stdout: "fs": "foofs", 2026-03-10T14:36:50.567 INFO:teuthology.orchestra.run.vm00.stdout: "mode": "RW", 2026-03-10T14:36:50.567 INFO:teuthology.orchestra.run.vm00.stdout: "path": "/" 2026-03-10T14:36:50.567 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-10T14:36:50.636 INFO:teuthology.run_tasks:Running task vip.exec... 2026-03-10T14:36:50.638 INFO:tasks.vip:Running commands on role host.a host ubuntu@vm00.local 2026-03-10T14:36:50.638 DEBUG:teuthology.orchestra.run.vm00:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mkdir /mnt/foo' 2026-03-10T14:36:50.667 INFO:teuthology.orchestra.run.vm00.stderr:+ mkdir /mnt/foo 2026-03-10T14:36:50.670 DEBUG:teuthology.orchestra.run.vm00:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'sleep 5' 2026-03-10T14:36:50.759 INFO:teuthology.orchestra.run.vm00.stderr:+ sleep 5 2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='client.14658 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='client.24415 -' entity='client.admin' cmd=[{"prefix": "nfs export create cephfs", "fsname": "foofs", "cluster_id": "foo", "pseudo_path": "/fake", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]: dispatch 2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]': finished 2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]: dispatch 2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local 
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:36:51.810 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:51 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='client.14658 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='client.24415 -' entity='client.admin' cmd=[{"prefix": "nfs export create cephfs", "fsname": "foofs", "cluster_id": "foo", "pseudo_path": "/fake", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]: dispatch
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]': finished
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.foofs.94ac2614", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo, allow rw tag cephfs data=foofs", "mds", "allow rw path=/"], "format": "json"}]: dispatch
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T14:36:51.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:51 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:52.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:52 vm03.local ceph-mon[54091]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:52.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:52 vm03.local ceph-mon[54091]: pgmap v74: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 999 B/s rd, 799 B/s wr, 1 op/s
2026-03-10T14:36:52.965 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:52 vm03.local ceph-mon[54091]: mgrmap e20: vm00.qkhroe(active, since 96s), standbys: vm03.iylznd
2026-03-10T14:36:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:52 vm00.local ceph-mon[47192]: from='mgr.14221 192.168.123.100:0/1438790913' entity='mgr.vm00.qkhroe'
2026-03-10T14:36:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:52 vm00.local ceph-mon[47192]: pgmap v74: 97 pgs: 97 active+clean; 451 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 999 B/s rd, 799 B/s wr, 1 op/s
2026-03-10T14:36:53.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:52 vm00.local ceph-mon[47192]: mgrmap e20: vm00.qkhroe(active, since 96s), standbys: vm03.iylznd
2026-03-10T14:36:55.060 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:54 vm00.local ceph-mon[47192]: pgmap v75: 97 pgs: 97 active+clean; 453 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 2 op/s
2026-03-10T14:36:55.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:54 vm03.local ceph-mon[54091]: pgmap v75: 97 pgs: 97 active+clean; 453 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1.4 KiB/s wr, 2 op/s
2026-03-10T14:36:55.761 DEBUG:teuthology.orchestra.run.vm00:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'mount -t nfs 12.12.1.100:/fake /mnt/foo'
2026-03-10T14:36:55.786 INFO:teuthology.orchestra.run.vm00.stderr:+ mount -t nfs 12.12.1.100:/fake /mnt/foo
2026-03-10T14:36:55.991 DEBUG:teuthology.orchestra.run.vm00:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'echo test > /mnt/foo/testfile'
2026-03-10T14:36:56.059 INFO:teuthology.orchestra.run.vm00.stderr:+ echo test
2026-03-10T14:36:56.078 DEBUG:teuthology.orchestra.run.vm00:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c sync
2026-03-10T14:36:56.146 INFO:teuthology.orchestra.run.vm00.stderr:+ sync
2026-03-10T14:36:56.422 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T14:36:56.425 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local
2026-03-10T14:36:56.425 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'stat -c '"'"'%u %g'"'"' /var/log/ceph | grep '"'"'167 167'"'"''
2026-03-10T14:36:56.650 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:56.733 INFO:teuthology.orchestra.run.vm00.stdout:167 167
2026-03-10T14:36:56.775 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch status'
2026-03-10T14:36:56.957 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:56.981 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:56 vm00.local ceph-mon[47192]: pgmap v76: 97 pgs: 97 active+clean; 453 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T14:36:57.203 INFO:teuthology.orchestra.run.vm00.stdout:Backend: cephadm
2026-03-10T14:36:57.203 INFO:teuthology.orchestra.run.vm00.stdout:Available: Yes
2026-03-10T14:36:57.204 INFO:teuthology.orchestra.run.vm00.stdout:Paused: No
2026-03-10T14:36:57.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:56 vm03.local ceph-mon[54091]: pgmap v76: 97 pgs: 97 active+clean; 453 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T14:36:57.270 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch ps'
2026-03-10T14:36:57.462 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.vm00 vm00 *:9093,9094 running (83s) 6s ago 2m 24.7M - 0.25.0 c8568f914cd2 194c498010dc
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter.vm00 vm00 *:9926 running (2m) 6s ago 2m 9227k - 19.2.3-678-ge911bdeb 654f31e6858e 58262c4cdbf4
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter.vm03 vm03 *:9926 running (97s) 7s ago 97s 6631k - 19.2.3-678-ge911bdeb 654f31e6858e 35aff906f0fb
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:crash.vm00 vm00 running (2m) 6s ago 2m 7612k - 19.2.3-678-ge911bdeb 654f31e6858e 126f45bac52f
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:crash.vm03 vm03 running (96s) 7s ago 95s 7612k - 19.2.3-678-ge911bdeb 654f31e6858e c1af3996c3a6
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:grafana.vm00 vm00 *:3000 running (82s) 6s ago 114s 74.8M - 10.4.0 c8b91775d855 8e4da2fe1e85
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:keepalived.nfs.foo.vm03.kmttbu vm03 *:9002 running (13s) 7s ago 13s 4568k - 2.2.4 4a3a1ff181d9 5817a43e1854
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:mds.foofs.vm00.icqynv vm00 running (20s) 6s ago 20s 16.8M - 19.2.3-678-ge911bdeb 654f31e6858e fb529ac36de8
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:mds.foofs.vm03.oldwcz vm03 running (22s) 7s ago 22s 17.9M - 19.2.3-678-ge911bdeb 654f31e6858e 1ae4200848ef
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:mgr.vm00.qkhroe vm00 *:9283,8765,8443 running (2m) 6s ago 2m 558M - 19.2.3-678-ge911bdeb 654f31e6858e 4bf3d3f512f8
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:mgr.vm03.iylznd vm03 *:8443,9283,8765 running (92s) 7s ago 92s 488M - 19.2.3-678-ge911bdeb 654f31e6858e 00d21181346d
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:mon.vm00 vm00 running (2m) 6s ago 2m 53.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 6d040919b8d4
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:mon.vm03 vm03 running (91s) 7s ago 91s 47.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d2ba0bf1bcdc
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:nfs.foo.0.1.vm03.bxbqms vm03 *:2049 running (10s) 7s ago 10s 13.6M - 5.9 654f31e6858e 5003fce82a0c
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm00 vm00 *:9100 running (2m) 6s ago 2m 9475k - 1.7.0 72c9c2088986 7d7f17f632f2
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm03 vm03 *:9100 running (93s) 7s ago 93s 9391k - 1.7.0 72c9c2088986 4a4afc29b40f
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm03 running (59s) 7s ago 59s 63.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4ea9782dde33
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (58s) 6s ago 58s 43.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 9b0f677e160e
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm03 running (56s) 7s ago 56s 42.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e ea0fb6ebf411
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm00 running (55s) 6s ago 55s 45.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 34334b9fd9c6
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm03 running (54s) 7s ago 54s 64.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b3e8a0b24f79
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm00 running (53s) 6s ago 52s 62.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2b081e8e6d9e
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm03 running (51s) 7s ago 51s 67.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 091d86cfc467
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm00 running (50s) 6s ago 50s 64.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e b94d36b5badf
2026-03-10T14:36:57.700 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.vm00 vm00 *:9095 running (8s) 6s ago 107s 33.7M - 2.51.0 1d3b7f56885b 636c3f4d63df
2026-03-10T14:36:57.769 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch ls'
2026-03-10T14:36:57.954 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:58.197 INFO:teuthology.orchestra.run.vm00.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-10T14:36:58.197 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager ?:9093,9094 1/1 7s ago 2m count:1
2026-03-10T14:36:58.197 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter ?:9926 2/2 7s ago 2m *
2026-03-10T14:36:58.197 INFO:teuthology.orchestra.run.vm00.stdout:crash 2/2 7s ago 2m *
2026-03-10T14:36:58.197 INFO:teuthology.orchestra.run.vm00.stdout:grafana ?:3000 1/1 7s ago 2m count:1
2026-03-10T14:36:58.197 INFO:teuthology.orchestra.run.vm00.stdout:ingress.nfs.foo 12.12.1.100:9002 1/1 7s ago 24s count:1
2026-03-10T14:36:58.197 INFO:teuthology.orchestra.run.vm00.stdout:mds.foofs 2/2 7s ago 25s count:2
2026-03-10T14:36:58.198 INFO:teuthology.orchestra.run.vm00.stdout:mgr 2/2 7s ago 2m count:2
2026-03-10T14:36:58.198 INFO:teuthology.orchestra.run.vm00.stdout:mon 2/2 7s ago 2m vm00:192.168.123.100=vm00;vm03:192.168.123.103=vm03;count:2
2026-03-10T14:36:58.198 INFO:teuthology.orchestra.run.vm00.stdout:nfs.foo ?:2049 1/1 7s ago 24s count:1
2026-03-10T14:36:58.198 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter ?:9100 2/2 7s ago 2m *
2026-03-10T14:36:58.198 INFO:teuthology.orchestra.run.vm00.stdout:osd.all-available-devices 8 7s ago 81s *
2026-03-10T14:36:58.198 INFO:teuthology.orchestra.run.vm00.stdout:prometheus ?:9095 1/1 7s ago 2m count:1
2026-03-10T14:36:58.260 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch host ls'
2026-03-10T14:36:58.445 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:58.693 INFO:teuthology.orchestra.run.vm00.stdout:HOST ADDR LABELS STATUS
2026-03-10T14:36:58.693 INFO:teuthology.orchestra.run.vm00.stdout:vm00 192.168.123.100
2026-03-10T14:36:58.693 INFO:teuthology.orchestra.run.vm00.stdout:vm03 192.168.123.103
2026-03-10T14:36:58.693 INFO:teuthology.orchestra.run.vm00.stdout:2 hosts in cluster
2026-03-10T14:36:58.760 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch device ls'
2026-03-10T14:36:58.948 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
["mon-mgr", ""]}]: dispatch 2026-03-10T14:36:58.972 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:58 vm00.local ceph-mon[47192]: from='client.14682 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:36:58.972 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:58 vm00.local ceph-mon[47192]: pgmap v77: 97 pgs: 97 active+clean; 460 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 3 op/s 2026-03-10T14:36:59.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:58 vm03.local ceph-mon[54091]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:36:59.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:58 vm03.local ceph-mon[54091]: from='client.14682 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T14:36:59.215 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:58 vm03.local ceph-mon[54091]: pgmap v77: 97 pgs: 97 active+clean; 460 KiB data, 214 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 2.1 KiB/s wr, 3 op/s 2026-03-10T14:36:59.385 INFO:teuthology.orchestra.run.vm00.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 25s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdb hdd DWNBRSTVMM00001 20.0G No 25s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdc hdd DWNBRSTVMM00002 20.0G No 25s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdd hdd DWNBRSTVMM00003 20.0G No 25s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vde hdd DWNBRSTVMM00004 20.0G No 25s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 26s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdb hdd DWNBRSTVMM03001 20.0G No 26s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdc hdd DWNBRSTVMM03002 20.0G No 26s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vdd hdd DWNBRSTVMM03003 20.0G No 26s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T14:36:59.386 INFO:teuthology.orchestra.run.vm00.stdout:vm03 /dev/vde hdd DWNBRSTVMM03004 20.0G No 26s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T14:36:59.433 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- bash -c 'ceph orch ls | grep '"'"'^osd.all-available-devices '"'"'' 2026-03-10T14:36:59.606 
2026-03-10T14:36:59.606 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:36:59.858 INFO:teuthology.orchestra.run.vm00.stdout:osd.all-available-devices 8 9s ago 83s *
2026-03-10T14:36:59.889 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:59 vm00.local ceph-mon[47192]: from='client.14686 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:59.889 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:36:59 vm00.local ceph-mon[47192]: from='client.14690 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:36:59.913 DEBUG:teuthology.run_tasks:Unwinding manager vip
2026-03-10T14:36:59.916 INFO:tasks.vip:Removing 12.12.0.100 (and any VIPs) on vm00.local iface eth0...
2026-03-10T14:36:59.916 DEBUG:teuthology.orchestra.run.vm00:> sudo ip addr del 12.12.0.100/22 dev eth0
2026-03-10T14:36:59.946 DEBUG:teuthology.orchestra.run.vm00:> sudo ip addr del 12.12.1.100/22 dev eth0
2026-03-10T14:37:00.013 INFO:teuthology.orchestra.run.vm00.stderr:Error: ipv4: Address not found.
2026-03-10T14:37:00.015 DEBUG:teuthology.orchestra.run:got remote process result: 2
2026-03-10T14:37:00.015 INFO:tasks.vip:Removing 12.12.0.103 (and any VIPs) on vm03.local iface eth0...
2026-03-10T14:37:00.015 DEBUG:teuthology.orchestra.run.vm03:> sudo ip addr del 12.12.0.103/22 dev eth0
2026-03-10T14:37:00.045 DEBUG:teuthology.orchestra.run.vm03:> sudo ip addr del 12.12.1.100/22 dev eth0
2026-03-10T14:37:00.106 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:59 vm03.local ceph-mon[54091]: from='client.14686 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:37:00.106 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:36:59 vm03.local ceph-mon[54091]: from='client.14690 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T14:37:00.111 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T14:37:00.114 INFO:tasks.cephadm:Teardown begin
2026-03-10T14:37:00.114 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T14:37:00.142 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T14:37:00.177 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T14:37:00.177 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf -- ceph mgr module disable cephadm
2026-03-10T14:37:00.375 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/mon.vm00/config
2026-03-10T14:37:00.394 INFO:teuthology.orchestra.run.vm00.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory
2026-03-10T14:37:00.418 DEBUG:teuthology.orchestra.run:got remote process result: 125
2026-03-10T14:37:00.418 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T14:37:00.418 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T14:37:00.435 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T14:37:00.450 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T14:37:00.450 INFO:tasks.cephadm.mon.vm00:Stopping mon.vm00...
2026-03-10T14:37:00.450 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00
2026-03-10T14:37:00.686 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:37:00 vm00.local systemd[1]: Stopping Ceph mon.vm00 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf...
2026-03-10T14:37:00.686 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:37:00 vm00.local ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00[47188]: 2026-03-10T14:37:00.574+0000 7f388a312640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm00 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T14:37:00.687 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 14:37:00 vm00.local ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm00[47188]: 2026-03-10T14:37:00.574+0000 7f388a312640 -1 mon.vm00@0(leader) e2 *** Got Signal Terminated ***
2026-03-10T14:37:00.897 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm00.service'
2026-03-10T14:37:00.961 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T14:37:00.961 INFO:tasks.cephadm.mon.vm00:Stopped mon.vm00
2026-03-10T14:37:00.961 INFO:tasks.cephadm.mon.vm03:Stopping mon.vm03...
2026-03-10T14:37:00.961 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm03
2026-03-10T14:37:01.262 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:37:01 vm03.local systemd[1]: Stopping Ceph mon.vm03 for 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf...
2026-03-10T14:37:01.262 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:37:01 vm03.local ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm03[54087]: 2026-03-10T14:37:01.108+0000 7f0e09907640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm03 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T14:37:01.262 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:37:01 vm03.local ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm03[54087]: 2026-03-10T14:37:01.108+0000 7f0e09907640 -1 mon.vm03@1(peon) e2 *** Got Signal Terminated ***
2026-03-10T14:37:01.262 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:37:01 vm03.local podman[71514]: 2026-03-10 14:37:01.131796032 +0000 UTC m=+0.037771443 container died d2ba0bf1bcdceb0da5e92e452b42d6a93c39780f2b7750c05f478013becc6581 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm03, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.41.3)
2026-03-10T14:37:01.262 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:37:01 vm03.local podman[71514]: 2026-03-10 14:37:01.170845045 +0000 UTC m=+0.076820465 container remove d2ba0bf1bcdceb0da5e92e452b42d6a93c39780f2b7750c05f478013becc6581 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm03, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df)
2026-03-10T14:37:01.262 INFO:journalctl@ceph.mon.vm03.vm03.stdout:Mar 10 14:37:01 vm03.local bash[71514]: ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf-mon-vm03
2026-03-10T14:37:01.272 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf@mon.vm03.service'
2026-03-10T14:37:01.338 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T14:37:01.341 INFO:tasks.cephadm.mon.vm03:Stopped mon.vm03
2026-03-10T14:37:01.341 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf --force --keep-logs
2026-03-10T14:37:01.544 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:37:29.518 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf --force --keep-logs
2026-03-10T14:37:29.647 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:38:10.629 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T14:38:10.659 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T14:38:10.684 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T14:38:10.684 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068/remote/vm00/crash
2026-03-10T14:38:10.685 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/crash -- .
2026-03-10T14:38:10.731 INFO:teuthology.orchestra.run.vm00.stderr:tar: /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/crash: Cannot open: No such file or directory
2026-03-10T14:38:10.731 INFO:teuthology.orchestra.run.vm00.stderr:tar: Error is not recoverable: exiting now
2026-03-10T14:38:10.732 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068/remote/vm03/crash
2026-03-10T14:38:10.733 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/crash -- .
2026-03-10T14:38:10.757 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/crash: Cannot open: No such file or directory
2026-03-10T14:38:10.757 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now
2026-03-10T14:38:10.758 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T14:38:10.758 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_DAEMON_PLACE_FAIL | egrep -v CEPHADM_FAILED_DAEMON | head -n 1
2026-03-10T14:38:10.804 INFO:tasks.cephadm:Compressing logs...
2026-03-10T14:38:10.805 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T14:38:10.847 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T14:38:10.870 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T14:38:10.870 INFO:teuthology.orchestra.run.vm03.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T14:38:10.871 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-volume.log
2026-03-10T14:38:10.871 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-client.ceph-exporter.vm03.log
2026-03-10T14:38:10.872 INFO:teuthology.orchestra.run.vm00.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-10T14:38:10.872 INFO:teuthology.orchestra.run.vm00.stderr:‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T14:38:10.872 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-volume.log: 91.7% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T14:38:10.872 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mgr.vm03.iylznd.log
2026-03-10T14:38:10.872 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-client.ceph-exporter.vm03.log: 28.6% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-client.ceph-exporter.vm03.log.gz
2026-03-10T14:38:10.873 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mon.vm03.log
2026-03-10T14:38:10.873 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mon.vm00.log
2026-03-10T14:38:10.874 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.log
2026-03-10T14:38:10.875 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mgr.vm03.iylznd.log: 90.8% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mgr.vm03.iylznd.log.gz
2026-03-10T14:38:10.875 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.audit.log
2026-03-10T14:38:10.876 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mon.vm03.log: 95.7% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-volume.log.gz
2026-03-10T14:38:10.877 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.log
2026-03-10T14:38:10.878 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.audit.log: 90.9% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.audit.log.gz
2026-03-10T14:38:10.879 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.cephadm.log
2026-03-10T14:38:10.879 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.log: 82.3% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.log.gz
2026-03-10T14:38:10.880 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mon.vm00.log: gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mgr.vm00.qkhroe.log
2026-03-10T14:38:10.880 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.log: 83.2% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.log.gz
2026-03-10T14:38:10.881 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.0.log
2026-03-10T14:38:10.881 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.cephadm.log: 81.4% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.cephadm.log.gz
2026-03-10T14:38:10.884 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.audit.log
2026-03-10T14:38:10.887 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.2.log
2026-03-10T14:38:10.890 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mgr.vm00.qkhroe.log: 91.7% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T14:38:10.892 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.cephadm.log
2026-03-10T14:38:10.892 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.4.log
2026-03-10T14:38:10.895 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.audit.log: 90.8% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.audit.log.gz
2026-03-10T14:38:10.898 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-volume.log
2026-03-10T14:38:10.898 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.cephadm.log: 82.9% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph.cephadm.log.gz
2026-03-10T14:38:10.903 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-client.ceph-exporter.vm00.log
2026-03-10T14:38:10.909 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.6.log
2026-03-10T14:38:10.912 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.1.log
2026-03-10T14:38:10.914 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-client.ceph-exporter.vm00.log: 91.7% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-client.ceph-exporter.vm00.log.gz
2026-03-10T14:38:10.914 INFO:teuthology.orchestra.run.vm00.stderr: 95.8% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-volume.log.gz
2026-03-10T14:38:10.920 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.3.log
2026-03-10T14:38:10.922 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.4.log: gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mds.foofs.vm03.oldwcz.log
2026-03-10T14:38:10.926 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.6.log: 92.1% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mon.vm03.log.gz
2026-03-10T14:38:10.929 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.5.log
2026-03-10T14:38:10.933 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mds.foofs.vm03.oldwcz.log: 83.3% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mds.foofs.vm03.oldwcz.log.gz
2026-03-10T14:38:10.936 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.7.log
2026-03-10T14:38:10.945 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mds.foofs.vm00.icqynv.log
2026-03-10T14:38:10.954 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.7.log: /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mds.foofs.vm00.icqynv.log: 71.7% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mds.foofs.vm00.icqynv.log.gz
2026-03-10T14:38:10.965 INFO:teuthology.orchestra.run.vm03.stderr: 93.6% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.0.log.gz
2026-03-10T14:38:10.974 INFO:teuthology.orchestra.run.vm03.stderr: 93.4% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.2.log.gz
2026-03-10T14:38:10.974 INFO:teuthology.orchestra.run.vm03.stderr: 93.5% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.4.log.gz
2026-03-10T14:38:10.995 INFO:teuthology.orchestra.run.vm03.stderr: 93.4% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.6.log.gz
2026-03-10T14:38:10.997 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-10T14:38:10.997 INFO:teuthology.orchestra.run.vm03.stderr:real 0m0.136s
2026-03-10T14:38:10.997 INFO:teuthology.orchestra.run.vm03.stderr:user 0m0.213s
2026-03-10T14:38:10.997 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.022s
2026-03-10T14:38:11.031 INFO:teuthology.orchestra.run.vm00.stderr: 89.6% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mgr.vm00.qkhroe.log.gz
2026-03-10T14:38:11.036 INFO:teuthology.orchestra.run.vm00.stderr: 93.7% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.1.log.gz
2026-03-10T14:38:11.042 INFO:teuthology.orchestra.run.vm00.stderr: 93.6% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.5.log.gz
2026-03-10T14:38:11.050 INFO:teuthology.orchestra.run.vm00.stderr: 93.7% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.7.log.gz
2026-03-10T14:38:11.051 INFO:teuthology.orchestra.run.vm00.stderr: 93.8% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-osd.3.log.gz
2026-03-10T14:38:11.059 INFO:teuthology.orchestra.run.vm00.stderr: 91.5% -- replaced with /var/log/ceph/14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf/ceph-mon.vm00.log.gz
2026-03-10T14:38:11.061 INFO:teuthology.orchestra.run.vm00.stderr:
2026-03-10T14:38:11.061 INFO:teuthology.orchestra.run.vm00.stderr:real 0m0.199s
2026-03-10T14:38:11.061 INFO:teuthology.orchestra.run.vm00.stderr:user 0m0.348s
2026-03-10T14:38:11.061 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m0.033s
2026-03-10T14:38:11.061 INFO:tasks.cephadm:Archiving logs...
2026-03-10T14:38:11.061 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068/remote/vm00/log
2026-03-10T14:38:11.061 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T14:38:11.144 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068/remote/vm03/log
2026-03-10T14:38:11.144 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T14:38:11.187 INFO:tasks.cephadm:Removing cluster...
2026-03-10T14:38:11.187 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf --force
2026-03-10T14:38:11.312 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:38:11.407 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf --force
2026-03-10T14:38:11.536 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 14451b84-1c8e-11f1-8a0b-8fd3ee4dc1bf
2026-03-10T14:38:11.629 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T14:38:11.629 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T14:38:11.645 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T14:38:11.659 INFO:tasks.cephadm:Teardown complete
2026-03-10T14:38:11.659 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T14:38:11.662 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T14:38:11.662 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T14:38:11.688 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T14:38:11.701 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-10T14:38:11.704 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T14:38:11.704 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-10T14:38:11.704 INFO:teuthology.orchestra.run.vm00.stdout:^? server1a.meinberg.de 2 7 40 299 -965us[ +796us] +/- 31ms
2026-03-10T14:38:11.704 INFO:teuthology.orchestra.run.vm00.stdout:^+ srv01-nc.securepod.org 2 6 377 34 +2162us[+2158us] +/- 22ms
2026-03-10T14:38:11.704 INFO:teuthology.orchestra.run.vm00.stdout:^* node-4.infogral.is 2 6 377 34 -894us[ -897us] +/- 14ms
2026-03-10T14:38:11.704 INFO:teuthology.orchestra.run.vm00.stdout:^+ gromit.nocabal.de 2 6 377 34 -950us[ -950us] +/- 37ms
2026-03-10T14:38:11.713 INFO:teuthology.orchestra.run.vm03.stderr:bash: line 1: ntpq: command not found
2026-03-10T14:38:11.717 INFO:teuthology.orchestra.run.vm03.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T14:38:11.717 INFO:teuthology.orchestra.run.vm03.stdout:===============================================================================
2026-03-10T14:38:11.717 INFO:teuthology.orchestra.run.vm03.stdout:^* node-4.infogral.is 2 6 377 36 -904us[ -936us] +/- 14ms
2026-03-10T14:38:11.717 INFO:teuthology.orchestra.run.vm03.stdout:^+ srv01-nc.securepod.org 2 6 377 36 +1928us[+1896us] +/- 22ms
2026-03-10T14:38:11.717 INFO:teuthology.orchestra.run.vm03.stdout:^+ gromit.nocabal.de 2 6 377 35 -952us[ -952us] +/- 37ms
2026-03-10T14:38:11.717 INFO:teuthology.orchestra.run.vm03.stdout:^- server1b.meinberg.de 2 6 177 15 -1429us[-1429us] +/- 33ms
2026-03-10T14:38:11.717 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T14:38:11.719 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T14:38:11.720 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T14:38:11.733 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T14:38:11.735 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T14:38:11.738 INFO:teuthology.task.internal:Duration was 404.719429 seconds
2026-03-10T14:38:11.738 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T14:38:11.740 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T14:38:11.740 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T14:38:11.746 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T14:38:11.787 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T14:38:11.804 INFO:teuthology.orchestra.run.vm03.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T14:38:12.244 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T14:38:12.244 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local
2026-03-10T14:38:12.244 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T14:38:12.268 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-10T14:38:12.268 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T14:38:12.293 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T14:38:12.293 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T14:38:12.311 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T14:38:12.703 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T14:38:12.703 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T14:38:12.705 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T14:38:12.727 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T14:38:12.727 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T14:38:12.728 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T14:38:12.728 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T14:38:12.728 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T14:38:12.728 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T14:38:12.729 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T14:38:12.729 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T14:38:12.729 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T14:38:12.729 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T14:38:12.843 INFO:teuthology.orchestra.run.vm03.stderr: 98.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T14:38:12.847 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 97.8% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T14:38:12.849 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T14:38:12.852 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T14:38:12.852 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T14:38:12.912 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T14:38:12.940 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T14:38:12.943 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T14:38:12.955 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T14:38:12.979 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-03-10T14:38:13.006 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-10T14:38:13.019 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T14:38:13.050 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:38:13.050 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T14:38:13.075 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T14:38:13.075 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T14:38:13.078 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T14:38:13.079 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068/remote/vm00
2026-03-10T14:38:13.079 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T14:38:13.120 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1068/remote/vm03
2026-03-10T14:38:13.120 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T14:38:13.148 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T14:38:13.148 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T14:38:13.162 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T14:38:13.203 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T14:38:13.206 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T14:38:13.206 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T14:38:13.209 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T14:38:13.209 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T14:38:13.217 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T14:38:13.231 INFO:teuthology.orchestra.run.vm00.stdout: 8532139 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 14:38 /home/ubuntu/cephtest
2026-03-10T14:38:13.261 INFO:teuthology.orchestra.run.vm03.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 14:38 /home/ubuntu/cephtest
2026-03-10T14:38:13.262 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T14:38:13.269 INFO:teuthology.run:Summary data:
description: orch/cephadm/smoke-roleless/{0-distro/centos_9.stream 1-start 2-services/nfs-keepalive-only 3-final}
duration: 404.7194290161133
owner: kyr
success: true
2026-03-10T14:38:13.269 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T14:38:13.288 INFO:teuthology.run:pass