2026-03-10T12:00:06.942 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T12:00:06.945 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T12:00:06.962 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020
branch: squid
description: orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 1-start 2-services/nfs2 3-final}
email: null
first_in_suite: false
flavor: default
job_id: '1020'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_DAEMON_PLACE_FAIL
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
- - host.b
  - client.1
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9PwUu+rivyiUvlZDvdzLXsGPJmM398h/cB/bkucETNAjL0GLLafEzghXR22GQW2ywTOm5HjvclKMPnn5IJIyQ=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDepGTemy4GjYOpXI+7kwex5IXHb1yi/tDXpQh+gI03g7QUSZtqf2CquOvZvKrVUjWRVBL3DwXHuEZIfiOAd1KU=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install runc nvmetcli nvme-cli -y
    - sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
    - sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
- cephadm:
    roleless: true
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
- vip.exec:
    all-hosts:
    - systemctl stop nfs-server
- cephadm.shell:
    host.a:
    - ceph nfs cluster create foo
- cephadm.wait_for_service:
    service: nfs.foo
- cephadm.shell:
    host.a:
    - stat -c '%u %g' /var/log/ceph | grep '167 167'
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
    - ceph orch ls | grep '^osd.all-available-devices '
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T12:00:06.962 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T12:00:06.963 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T12:00:06.963 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T12:00:06.963 INFO:teuthology.task.internal:Checking packages...
2026-03-10T12:00:06.963 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T12:00:06.963 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T12:00:06.963 INFO:teuthology.packaging:ref: None
2026-03-10T12:00:06.963 INFO:teuthology.packaging:tag: None
2026-03-10T12:00:06.963 INFO:teuthology.packaging:branch: squid
2026-03-10T12:00:06.963 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T12:00:06.963 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T12:00:07.691 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T12:00:07.692 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T12:00:07.693 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T12:00:07.693 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T12:00:07.693 INFO:teuthology.task.internal:Saving configuration
2026-03-10T12:00:07.697 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T12:00:07.698 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T12:00:07.706 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 11:58:41.187219', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9PwUu+rivyiUvlZDvdzLXsGPJmM398h/cB/bkucETNAjL0GLLafEzghXR22GQW2ywTOm5HjvclKMPnn5IJIyQ='}
2026-03-10T12:00:07.710 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 11:58:41.187710', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDepGTemy4GjYOpXI+7kwex5IXHb1yi/tDXpQh+gI03g7QUSZtqf2CquOvZvKrVUjWRVBL3DwXHuEZIfiOAd1KU='}
2026-03-10T12:00:07.710 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T12:00:07.711 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['host.a', 'client.0']
2026-03-10T12:00:07.711 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['host.b', 'client.1']
2026-03-10T12:00:07.711 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T12:00:07.716 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-10T12:00:07.722 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-10T12:00:07.722 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f849abb3e20>, signals=[15])
2026-03-10T12:00:07.722 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T12:00:07.723 INFO:teuthology.task.internal:Opening connections...
2026-03-10T12:00:07.723 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-10T12:00:07.724 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T12:00:07.782 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-10T12:00:07.782 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T12:00:07.839 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T12:00:07.840 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-10T12:00:07.884 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-10T12:00:07.884 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:NAME="CentOS Stream"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="9"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:ID="centos"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE="rhel fedora"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="9"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:PLATFORM_ID="platform:el9"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:ANSI_COLOR="0;31"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:LOGO="fedora-logo-icon"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://centos.org/"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T12:00:07.938 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T12:00:07.939 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-10T12:00:07.942 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-10T12:00:07.956 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-10T12:00:07.956 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:NAME="CentOS Stream"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="9"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:ID="centos"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE="rhel fedora"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="9"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:PLATFORM_ID="platform:el9"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:ANSI_COLOR="0;31"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:LOGO="fedora-logo-icon"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://centos.org/"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T12:00:08.010 INFO:teuthology.orchestra.run.vm09.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T12:00:08.010 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-10T12:00:08.014 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T12:00:08.016 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T12:00:08.017 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T12:00:08.017 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-10T12:00:08.019 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-10T12:00:08.064 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T12:00:08.065 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T12:00:08.065 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-10T12:00:08.072 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-10T12:00:08.084 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T12:00:08.118 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T12:00:08.118 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T12:00:08.126 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-10T12:00:08.139 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:00:08.317 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-10T12:00:08.331 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:00:08.513 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T12:00:08.514 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T12:00:08.514 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T12:00:08.516 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T12:00:08.531 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T12:00:08.532 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T12:00:08.533 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T12:00:08.533 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T12:00:08.571 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T12:00:08.587 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T12:00:08.588 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T12:00:08.588 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T12:00:08.637 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:00:08.637 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T12:00:08.650 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:00:08.650 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T12:00:08.679 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T12:00:08.702 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T12:00:08.711 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T12:00:08.714 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T12:00:08.722 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T12:00:08.724 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T12:00:08.725 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T12:00:08.725 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T12:00:08.754 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T12:00:08.788 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T12:00:08.790 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T12:00:08.791 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T12:00:08.819 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T12:00:08.843 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T12:00:08.898 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T12:00:08.955 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T12:00:08.955 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T12:00:09.014 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T12:00:09.035 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T12:00:09.089 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T12:00:09.089 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T12:00:09.149 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-10T12:00:09.150 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-10T12:00:09.174 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T12:00:09.219 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T12:00:09.479 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T12:00:09.480 INFO:teuthology.task.internal:Starting timer...
2026-03-10T12:00:09.480 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T12:00:09.483 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T12:00:09.485 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0']}
2026-03-10T12:00:09.485 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-10T12:00:09.485 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-10T12:00:09.485 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T12:00:09.485 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T12:00:09.485 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T12:00:09.485 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T12:00:09.487 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T12:00:09.487 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T12:00:09.489 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T12:00:10.078 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T12:00:10.084 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T12:00:10.084 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryf4vguocp --limit vm00.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T12:02:09.204 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm09.local')]
2026-03-10T12:02:09.205 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-10T12:02:09.205 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T12:02:09.273 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-10T12:02:09.359 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-10T12:02:09.359 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-10T12:02:09.359 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T12:02:09.426 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-10T12:02:09.510 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-10T12:02:09.510 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T12:02:09.512 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T12:02:09.512 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T12:02:09.513 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T12:02:09.514 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T12:02:09.514 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T12:02:09.563 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T12:02:09.582 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T12:02:09.588 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T12:02:09.610 INFO:teuthology.orchestra.run.vm09.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T12:02:09.612 INFO:teuthology.orchestra.run.vm00.stderr:sudo: ntpd: command not found
2026-03-10T12:02:09.628 INFO:teuthology.orchestra.run.vm00.stdout:506 Cannot talk to daemon
2026-03-10T12:02:09.643 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T12:02:09.647 INFO:teuthology.orchestra.run.vm09.stderr:sudo: ntpd: command not found
2026-03-10T12:02:09.659 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T12:02:09.665 INFO:teuthology.orchestra.run.vm09.stdout:506 Cannot talk to daemon
2026-03-10T12:02:09.683 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T12:02:09.705 INFO:teuthology.orchestra.run.vm09.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T12:02:09.710 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-10T12:02:09.714 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T12:02:09.715 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-10T12:02:09.762 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found
2026-03-10T12:02:09.765 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T12:02:09.765 INFO:teuthology.orchestra.run.vm09.stdout:===============================================================================
2026-03-10T12:02:09.765 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T12:02:09.768 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T12:02:09.768 DEBUG:teuthology.orchestra.run.vm00:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T12:02:09.769 DEBUG:teuthology.orchestra.run.vm09:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T12:02:09.771 DEBUG:teuthology.task.pexec:ubuntu@vm00.local< sudo dnf remove nvme-cli -y
2026-03-10T12:02:09.771 DEBUG:teuthology.task.pexec:ubuntu@vm00.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T12:02:09.771 DEBUG:teuthology.task.pexec:ubuntu@vm00.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T12:02:09.771 DEBUG:teuthology.task.pexec:ubuntu@vm00.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T12:02:09.771 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm00.local
2026-03-10T12:02:09.771 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T12:02:09.771 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T12:02:09.771 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T12:02:09.771 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T12:02:09.809 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf remove nvme-cli -y
2026-03-10T12:02:09.809 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T12:02:09.809 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T12:02:09.809 DEBUG:teuthology.task.pexec:ubuntu@vm09.local< sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T12:02:09.809 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm09.local
2026-03-10T12:02:09.809 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T12:02:09.809 INFO:teuthology.task.pexec:sudo dnf install runc nvmetcli nvme-cli -y
2026-03-10T12:02:09.809 INFO:teuthology.task.pexec:sudo sed -i 's/^#runtime = "crun"/runtime = "runc"/g' /usr/share/containers/containers.conf
2026-03-10T12:02:09.809 INFO:teuthology.task.pexec:sudo sed -i 's/runtime = "crun"/#runtime = "crun"/g' /usr/share/containers/containers.conf
2026-03-10T12:02:10.023 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: nvme-cli
2026-03-10T12:02:10.023 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-10T12:02:10.026 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-10T12:02:10.027 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-10T12:02:10.027 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-10T12:02:10.052 INFO:teuthology.orchestra.run.vm09.stdout:No match for argument: nvme-cli
2026-03-10T12:02:10.052 INFO:teuthology.orchestra.run.vm09.stderr:No packages marked for removal.
2026-03-10T12:02:10.059 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T12:02:10.060 INFO:teuthology.orchestra.run.vm09.stdout:Nothing to do.
2026-03-10T12:02:10.060 INFO:teuthology.orchestra.run.vm09.stdout:Complete!
2026-03-10T12:02:10.542 INFO:teuthology.orchestra.run.vm00.stdout:Last metadata expiration check: 0:01:05 ago on Tue 10 Mar 2026 12:01:05 PM UTC.
2026-03-10T12:02:10.628 INFO:teuthology.orchestra.run.vm09.stdout:Last metadata expiration check: 0:01:19 ago on Tue 10 Mar 2026 12:00:51 PM UTC.
2026-03-10T12:02:10.669 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:Installing:
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:Installing dependencies:
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:Install 7 Packages
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:Total download size: 6.3 M
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:Installed size: 24 M
2026-03-10T12:02:10.670 INFO:teuthology.orchestra.run.vm00.stdout:Downloading Packages:
2026-03-10T12:02:10.760 INFO:teuthology.orchestra.run.vm09.stdout:Dependencies resolved.
2026-03-10T12:02:10.760 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T12:02:10.760 INFO:teuthology.orchestra.run.vm09.stdout: Package Arch Version Repository Size
2026-03-10T12:02:10.760 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T12:02:10.760 INFO:teuthology.orchestra.run.vm09.stdout:Installing:
2026-03-10T12:02:10.760 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout: runc x86_64 4:1.4.0-2.el9 appstream 4.0 M
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:Installing dependencies:
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:Transaction Summary
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:================================================================================
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:Install 7 Packages
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:Total download size: 6.3 M 2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:Installed size: 24 M 2026-03-10T12:02:10.761 INFO:teuthology.orchestra.run.vm09.stdout:Downloading Packages: 2026-03-10T12:02:11.326 INFO:teuthology.orchestra.run.vm00.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 543 kB/s | 44 kB 00:00 2026-03-10T12:02:11.347 INFO:teuthology.orchestra.run.vm00.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 701 kB/s | 72 kB 00:00 2026-03-10T12:02:11.363 INFO:teuthology.orchestra.run.vm00.stdout:(3/7): python3-kmod-0.9-32.el9.x86_64.rpm 2.2 MB/s | 84 kB 00:00 2026-03-10T12:02:11.387 INFO:teuthology.orchestra.run.vm00.stdout:(4/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 3.7 MB/s | 150 kB 00:00 2026-03-10T12:02:11.410 INFO:teuthology.orchestra.run.vm00.stdout:(5/7): nvme-cli-2.16-1.el9.x86_64.rpm 7.0 MB/s | 1.2 MB 00:00 2026-03-10T12:02:11.429 INFO:teuthology.orchestra.run.vm00.stdout:(6/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 12 MB/s | 837 kB 00:00 2026-03-10T12:02:11.614 INFO:teuthology.orchestra.run.vm09.stdout:(1/7): nvmetcli-0.8-3.el9.noarch.rpm 244 kB/s | 44 kB 00:00 2026-03-10T12:02:11.689 INFO:teuthology.orchestra.run.vm09.stdout:(2/7): python3-configshell-1.1.30-1.el9.noarch. 
282 kB/s | 72 kB 00:00 2026-03-10T12:02:11.703 INFO:teuthology.orchestra.run.vm09.stdout:(3/7): nvme-cli-2.16-1.el9.x86_64.rpm 4.3 MB/s | 1.2 MB 00:00 2026-03-10T12:02:11.764 INFO:teuthology.orchestra.run.vm09.stdout:(4/7): python3-kmod-0.9-32.el9.x86_64.rpm 560 kB/s | 84 kB 00:00 2026-03-10T12:02:12.117 INFO:teuthology.orchestra.run.vm09.stdout:(5/7): runc-1.4.0-2.el9.x86_64.rpm 11 MB/s | 4.0 MB 00:00 2026-03-10T12:02:12.487 INFO:teuthology.orchestra.run.vm00.stdout:(7/7): runc-1.4.0-2.el9.x86_64.rpm 3.6 MB/s | 4.0 MB 00:01 2026-03-10T12:02:12.487 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-10T12:02:12.487 INFO:teuthology.orchestra.run.vm00.stdout:Total 3.5 MB/s | 6.3 MB 00:01 2026-03-10T12:02:12.487 INFO:teuthology.orchestra.run.vm09.stdout:(6/7): python3-pyparsing-2.4.7-9.el9.noarch.rpm 189 kB/s | 150 kB 00:00 2026-03-10T12:02:12.575 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-10T12:02:12.583 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-03-10T12:02:12.583 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-10T12:02:12.664 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-03-10T12:02:12.667 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-03-10T12:02:12.895 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-03-10T12:02:12.915 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7 2026-03-10T12:02:12.932 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7 2026-03-10T12:02:12.943 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7 2026-03-10T12:02:13.013 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7 2026-03-10T12:02:13.115 INFO:teuthology.orchestra.run.vm00.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7 2026-03-10T12:02:13.178 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7 2026-03-10T12:02:13.451 INFO:teuthology.orchestra.run.vm00.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7 2026-03-10T12:02:13.457 INFO:teuthology.orchestra.run.vm00.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7 2026-03-10T12:02:13.888 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 7/7 2026-03-10T12:02:13.888 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 
2026-03-10T12:02:13.888 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:02:14.558 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7 2026-03-10T12:02:14.558 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7 2026-03-10T12:02:14.558 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7 2026-03-10T12:02:14.558 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7 2026-03-10T12:02:14.558 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7 2026-03-10T12:02:14.559 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7 2026-03-10T12:02:14.655 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7 2026-03-10T12:02:14.656 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:02:14.656 INFO:teuthology.orchestra.run.vm00.stdout:Installed: 2026-03-10T12:02:14.656 INFO:teuthology.orchestra.run.vm00.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T12:02:14.656 INFO:teuthology.orchestra.run.vm00.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T12:02:14.656 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T12:02:14.656 INFO:teuthology.orchestra.run.vm00.stdout: runc-4:1.4.0-2.el9.x86_64 2026-03-10T12:02:14.656 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:02:14.656 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 
2026-03-10T12:02:14.707 INFO:teuthology.orchestra.run.vm09.stdout:(7/7): python3-urwid-2.1.2-4.el9.x86_64.rpm 279 kB/s | 837 kB 00:03 2026-03-10T12:02:14.710 INFO:teuthology.orchestra.run.vm09.stdout:-------------------------------------------------------------------------------- 2026-03-10T12:02:14.711 INFO:teuthology.orchestra.run.vm09.stdout:Total 1.6 MB/s | 6.3 MB 00:03 2026-03-10T12:02:14.780 DEBUG:teuthology.parallel:result is None 2026-03-10T12:02:14.817 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction check 2026-03-10T12:02:14.829 INFO:teuthology.orchestra.run.vm09.stdout:Transaction check succeeded. 2026-03-10T12:02:14.830 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction test 2026-03-10T12:02:14.918 INFO:teuthology.orchestra.run.vm09.stdout:Transaction test succeeded. 2026-03-10T12:02:14.919 INFO:teuthology.orchestra.run.vm09.stdout:Running transaction 2026-03-10T12:02:15.149 INFO:teuthology.orchestra.run.vm09.stdout: Preparing : 1/1 2026-03-10T12:02:15.167 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/7 2026-03-10T12:02:15.182 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/7 2026-03-10T12:02:15.190 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/7 2026-03-10T12:02:15.200 INFO:teuthology.orchestra.run.vm09.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/7 2026-03-10T12:02:15.203 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/7 2026-03-10T12:02:15.267 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/7 2026-03-10T12:02:15.450 INFO:teuthology.orchestra.run.vm09.stdout: Installing : runc-4:1.4.0-2.el9.x86_64 6/7 2026-03-10T12:02:15.460 INFO:teuthology.orchestra.run.vm09.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 7/7 2026-03-10T12:02:15.896 INFO:teuthology.orchestra.run.vm09.stdout: Running scriptlet: 
nvme-cli-2.16-1.el9.x86_64 7/7 2026-03-10T12:02:15.896 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T12:02:15.896 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:02:16.551 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/7 2026-03-10T12:02:16.551 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/7 2026-03-10T12:02:16.551 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/7 2026-03-10T12:02:16.551 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/7 2026-03-10T12:02:16.551 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/7 2026-03-10T12:02:16.552 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/7 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout: Verifying : runc-4:1.4.0-2.el9.x86_64 7/7 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout:Installed: 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout: runc-4:1.4.0-2.el9.x86_64 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:02:16.685 INFO:teuthology.orchestra.run.vm09.stdout:Complete! 
2026-03-10T12:02:16.799 DEBUG:teuthology.parallel:result is None 2026-03-10T12:02:16.799 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-10T12:02:16.849 INFO:tasks.cephadm:Config: {'roleless': True, 'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_DAEMON_PLACE_FAIL', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-10T12:02:16.849 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T12:02:16.849 INFO:tasks.cephadm:Cluster fsid is fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:02:16.849 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T12:02:16.849 INFO:tasks.cephadm:No mon roles; fabricating mons 2026-03-10T12:02:16.849 INFO:tasks.cephadm:Monitor IPs: {'mon.vm00': '192.168.123.100', 'mon.vm09': '192.168.123.109'} 2026-03-10T12:02:16.849 INFO:tasks.cephadm:Normalizing hostnames... 
2026-03-10T12:02:16.849 DEBUG:teuthology.orchestra.run.vm00:> sudo hostname $(hostname -s) 2026-03-10T12:02:16.885 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s) 2026-03-10T12:02:16.928 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-10T12:02:16.928 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T12:02:17.632 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-10T12:02:18.201 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T12:02:18.202 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 2026-03-10T12:02:18.202 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm 
2026-03-10T12:02:18.202 DEBUG:teuthology.orchestra.run.vm00:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T12:02:19.733 INFO:teuthology.orchestra.run.vm00.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 12:02 /home/ubuntu/cephtest/cephadm 2026-03-10T12:02:19.733 DEBUG:teuthology.orchestra.run.vm09:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T12:02:21.198 INFO:teuthology.orchestra.run.vm09.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 12:02 /home/ubuntu/cephtest/cephadm 2026-03-10T12:02:21.198 DEBUG:teuthology.orchestra.run.vm00:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T12:02:21.224 DEBUG:teuthology.orchestra.run.vm09:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T12:02:21.250 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-10T12:02:21.250 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T12:02:21.268 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T12:02:21.471 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
2026-03-10T12:02:21.524 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T12:02:56.825 INFO:teuthology.orchestra.run.vm09.stdout:{ 2026-03-10T12:02:56.825 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T12:02:56.825 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T12:02:56.825 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [ 2026-03-10T12:02:56.825 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T12:02:56.826 INFO:teuthology.orchestra.run.vm09.stdout: ] 2026-03-10T12:02:56.826 INFO:teuthology.orchestra.run.vm09.stdout:} 2026-03-10T12:03:11.169 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-10T12:03:11.169 INFO:teuthology.orchestra.run.vm00.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T12:03:11.169 INFO:teuthology.orchestra.run.vm00.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T12:03:11.169 INFO:teuthology.orchestra.run.vm00.stdout: "repo_digests": [ 2026-03-10T12:03:11.169 INFO:teuthology.orchestra.run.vm00.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T12:03:11.169 INFO:teuthology.orchestra.run.vm00.stdout: ] 2026-03-10T12:03:11.169 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-10T12:03:11.205 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph 2026-03-10T12:03:11.236 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph 2026-03-10T12:03:11.276 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /etc/ceph 
2026-03-10T12:03:11.306 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph 2026-03-10T12:03:11.345 INFO:tasks.cephadm:Writing seed config... 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-10T12:03:11.346 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True 2026-03-10T12:03:11.346 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T12:03:11.346 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-10T12:03:11.364 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few 
osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = fba12862-1c78-11f1-b92d-892b8c98a56b [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = True bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true 2026-03-10T12:03:11.364 DEBUG:teuthology.orchestra.run.vm00:mon.vm00> sudo journalctl -f -n 0 -u ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm00.service 2026-03-10T12:03:11.406 INFO:tasks.cephadm:Bootstrapping... 
2026-03-10T12:03:11.406 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid fba12862-1c78-11f1-b92d-892b8c98a56b --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T12:03:11.558 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-10T12:03:11.558 INFO:teuthology.orchestra.run.vm00.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'fba12862-1c78-11f1-b92d-892b8c98a56b', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.100', '--skip-admin-label'] 2026-03-10T12:03:11.558 INFO:teuthology.orchestra.run.vm00.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-10T12:03:11.558 INFO:teuthology.orchestra.run.vm00.stdout:Verifying podman|docker is present... 2026-03-10T12:03:11.579 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stdout 5.8.0 2026-03-10T12:03:11.579 INFO:teuthology.orchestra.run.vm00.stdout:Verifying lvm2 is present... 2026-03-10T12:03:11.579 INFO:teuthology.orchestra.run.vm00.stdout:Verifying time synchronization is in place... 
2026-03-10T12:03:11.586 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T12:03:11.586 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T12:03:11.592 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T12:03:11.592 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T12:03:11.599 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-10T12:03:11.605 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-10T12:03:11.605 INFO:teuthology.orchestra.run.vm00.stdout:Unit chronyd.service is enabled and running 2026-03-10T12:03:11.605 INFO:teuthology.orchestra.run.vm00.stdout:Repeating the final host check... 2026-03-10T12:03:11.627 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stdout 5.8.0 2026-03-10T12:03:11.627 INFO:teuthology.orchestra.run.vm00.stdout:podman (/bin/podman) version 5.8.0 is present 2026-03-10T12:03:11.627 INFO:teuthology.orchestra.run.vm00.stdout:systemctl is present 2026-03-10T12:03:11.627 INFO:teuthology.orchestra.run.vm00.stdout:lvcreate is present 2026-03-10T12:03:11.634 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T12:03:11.634 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T12:03:11.643 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T12:03:11.643 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T12:03:11.652 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-10T12:03:11.660 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-10T12:03:11.660 
INFO:teuthology.orchestra.run.vm00.stdout:Unit chronyd.service is enabled and running 2026-03-10T12:03:11.660 INFO:teuthology.orchestra.run.vm00.stdout:Host looks OK 2026-03-10T12:03:11.660 INFO:teuthology.orchestra.run.vm00.stdout:Cluster fsid: fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:03:11.660 INFO:teuthology.orchestra.run.vm00.stdout:Acquiring lock 140112596047472 on /run/cephadm/fba12862-1c78-11f1-b92d-892b8c98a56b.lock 2026-03-10T12:03:11.660 INFO:teuthology.orchestra.run.vm00.stdout:Lock 140112596047472 acquired on /run/cephadm/fba12862-1c78-11f1-b92d-892b8c98a56b.lock 2026-03-10T12:03:11.660 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 3300 ... 2026-03-10T12:03:11.661 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 6789 ... 2026-03-10T12:03:11.661 INFO:teuthology.orchestra.run.vm00.stdout:Base mon IP(s) is [192.168.123.100:3300, 192.168.123.100:6789], mon addrv is [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T12:03:11.665 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.100 metric 100 2026-03-10T12:03:11.665 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.100 metric 100 2026-03-10T12:03:11.668 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T12:03:11.668 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium 2026-03-10T12:03:11.671 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T12:03:11.671 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T12:03:11.671 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T12:03:11.671 
INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000 2026-03-10T12:03:11.671 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:0/64 scope link noprefixroute 2026-03-10T12:03:11.671 INFO:teuthology.orchestra.run.vm00.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T12:03:11.671 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 2026-03-10T12:03:11.671 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 2026-03-10T12:03:11.671 INFO:teuthology.orchestra.run.vm00.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24'] 2026-03-10T12:03:11.672 INFO:teuthology.orchestra.run.vm00.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-10T12:03:11.672 INFO:teuthology.orchestra.run.vm00.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T12:03:12.954 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T12:03:12.954 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
2026-03-10T12:03:12.954 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Getting image source signatures 2026-03-10T12:03:12.954 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7 2026-03-10T12:03:12.954 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92 2026-03-10T12:03:12.954 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c 2026-03-10T12:03:12.954 INFO:teuthology.orchestra.run.vm00.stdout:/bin/podman: stderr Writing manifest to image destination 2026-03-10T12:03:13.354 INFO:teuthology.orchestra.run.vm00.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T12:03:13.354 INFO:teuthology.orchestra.run.vm00.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T12:03:13.354 INFO:teuthology.orchestra.run.vm00.stdout:Extracting ceph user uid/gid from container image... 2026-03-10T12:03:13.572 INFO:teuthology.orchestra.run.vm00.stdout:stat: stdout 167 167 2026-03-10T12:03:13.572 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial keys... 2026-03-10T12:03:13.811 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCBCLBpwIf3KBAA4JWHYtngCshgdRWXHbPG7A== 2026-03-10T12:03:14.059 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCBCLBpJY5vNxAA+ePv2e6Y3RTTF0YumUvyBQ== 2026-03-10T12:03:14.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCCCLBpWr3KCRAA9NQNUKrS+3zhN9iAt4qwtQ== 2026-03-10T12:03:14.309 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial monmap... 
2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool for vm00 [v2:192.168.123.100:3300,v1:192.168.123.100:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = quincy 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: set fsid to fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:14.578 INFO:teuthology.orchestra.run.vm00.stdout:Creating mon... 2026-03-10T12:03:14.834 INFO:teuthology.orchestra.run.vm00.stdout:create mon.vm00 on 2026-03-10T12:03:15.121 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T12:03:15.262 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b.target → /etc/systemd/system/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b.target. 
2026-03-10T12:03:15.262 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b.target → /etc/systemd/system/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b.target. 2026-03-10T12:03:15.420 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm00 2026-03-10T12:03:15.420 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm00.service: Unit ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm00.service not loaded. 2026-03-10T12:03:15.560 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b.target.wants/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm00.service → /etc/systemd/system/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@.service. 2026-03-10T12:03:15.761 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T12:03:15.761 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T12:03:15.761 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon to start... 2026-03-10T12:03:15.761 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon... 
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout id: fba12862-1c78-11f1-b92d-892b8c98a56b
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout services:
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm00 (age 0.181889s)
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout data:
2026-03-10T12:03:16.134 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T12:03:16.135 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T12:03:16.135 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T12:03:16.135 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T12:03:16.135 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T12:03:16.135 INFO:teuthology.orchestra.run.vm00.stdout:mon is available
2026-03-10T12:03:16.135 INFO:teuthology.orchestra.run.vm00.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = fba12862-1c78-11f1-b92d-892b8c98a56b
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T12:03:16.481 INFO:teuthology.orchestra.run.vm00.stdout:Generating new minimal ceph.conf...
2026-03-10T12:03:16.807 INFO:teuthology.orchestra.run.vm00.stdout:Restarting the monitor...
2026-03-10T12:03:17.360 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 systemd[1]: Starting Ceph mon.vm00 for fba12862-1c78-11f1-b92d-892b8c98a56b...
2026-03-10T12:03:17.415 INFO:teuthology.orchestra.run.vm00.stdout:Setting public_network to 192.168.123.0/24 in mon config section
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 podman[49171]: 2026-03-10 12:03:17.361697894 +0000 UTC m=+0.017778066 container create 50ae03124fd82e3054d5dcb50874afd67201990ad6cbfe4cab03de2481766055 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm00, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 podman[49171]: 2026-03-10 12:03:17.401584083 +0000 UTC m=+0.057664265 container init 50ae03124fd82e3054d5dcb50874afd67201990ad6cbfe4cab03de2481766055 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm00, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223)
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 podman[49171]: 2026-03-10 12:03:17.406818437 +0000 UTC m=+0.062898599 container start 50ae03124fd82e3054d5dcb50874afd67201990ad6cbfe4cab03de2481766055 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm00, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 bash[49171]: 50ae03124fd82e3054d5dcb50874afd67201990ad6cbfe4cab03de2481766055
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 podman[49171]: 2026-03-10 12:03:17.355316643 +0000 UTC m=+0.011396815 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 systemd[1]: Started Ceph mon.vm00 for fba12862-1c78-11f1-b92d-892b8c98a56b.
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 6
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: pidfile_write: ignore empty --pid-file
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: load: jerasure load: lrc
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: RocksDB version: 7.9.2
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Git sha 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: DB SUMMARY
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: DB Session ID: C6ZLYERLHG17H91DUZDY
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: CURRENT file: CURRENT
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm00/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm00/store.db: 000009.log size: 87793 ;
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.error_if_exists: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.create_if_missing: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.paranoid_checks: 1
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.env: 0x5621e2cd4dc0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.fs: PosixFileSystem
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.info_log: 0x5621e42dade0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_file_opening_threads: 16
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.statistics: (nil)
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.use_fsync: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_log_file_size: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.keep_log_file_num: 1000
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.recycle_log_file_num: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.allow_fallocate: 1
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.allow_mmap_reads: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.allow_mmap_writes: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.use_direct_reads: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.create_missing_column_families: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.db_log_dir:
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.wal_dir:
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T12:03:17.632 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.advise_random_on_open: 1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.db_write_buffer_size: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.write_buffer_manager: 0x5621e42df900
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.rate_limiter: (nil)
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.wal_recovery_mode: 2
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.enable_thread_tracking: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.enable_pipelined_write: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.unordered_write: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.row_cache: None
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.wal_filter: None
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.allow_ingest_behind: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.two_write_queues: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.manual_wal_flush: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.wal_compression: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.atomic_flush: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.log_readahead_size: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.best_efforts_recovery: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.allow_data_in_errors: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.db_host_id: __hostname__
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_background_jobs: 2
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_background_compactions: -1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_subcompactions: 1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_total_wal_size: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_open_files: -1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bytes_per_sync: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_readahead_size: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_background_flushes: -1
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Compression algorithms supported:
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: kZSTD supported: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: kXpressCompression supported: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: kBZip2Compression supported: 0
2026-03-10T12:03:17.633 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: kLZ4Compression supported: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: kZlibCompression supported: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: kLZ4HCCompression supported: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: kSnappyCompression supported: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm00/store.db/MANIFEST-000010
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.merge_operator:
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_filter: None
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_filter_factory: None
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.sst_partitioner_factory: None
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5621e42da5c0)
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: cache_index_and_filter_blocks: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: pin_top_level_index_and_filter: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: index_type: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: data_block_index_type: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: index_shortening: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: checksum: 4
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: no_block_cache: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_cache: 0x5621e42ff350
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_cache_name: BinnedLRUCache
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_cache_options:
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: capacity : 536870912
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: num_shard_bits : 4
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: strict_capacity_limit : 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: high_pri_pool_ratio: 0.000
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_cache_compressed: (nil)
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: persistent_cache: (nil)
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_size: 4096
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_size_deviation: 10
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_restart_interval: 16
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: index_block_restart_interval: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: metadata_block_size: 4096
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: partition_filters: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: use_delta_encoding: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: filter_policy: bloomfilter
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: whole_key_filtering: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: verify_compression: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: read_amp_bytes_per_bit: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: format_version: 5
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: enable_index_compression: 1
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: block_align: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: max_auto_readahead_size: 262144
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: prepopulate_block_cache: 0
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: initial_auto_readahead_size: 8192
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout: num_file_reads_for_auto_readahead: 2
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.write_buffer_size: 33554432
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_write_buffer_number: 2
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression: NoCompression
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression: Disabled
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.prefix_extractor: nullptr
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T12:03:17.634 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.num_levels: 7
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.level: 32767
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.strategy: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.enabled: false
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.target_file_size_base: 67108864
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.arena_block_size: 1048576
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.disable_auto_compactions: 0
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.inplace_update_support: 0 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T12:03:17.635 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.bloom_locality: 0 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.max_successive_merges: 0 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T12:03:17.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.ttl: 2592000 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.enable_blob_files: false 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.min_blob_size: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: 
Options.blob_file_size: 268435456 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm00/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 60e808a8-51d8-40ee-8404-2a2691ec3e5d 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 
12:03:17 vm00 ceph-mon[49203]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773144197435003, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773144197436976, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 84758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 244, "table_properties": {"data_size": 82910, "index_size": 237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10018, "raw_average_key_size": 47, "raw_value_size": 77225, "raw_average_value_size": 365, "num_data_blocks": 11, "num_entries": 211, "num_filter_entries": 211, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773144197, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "60e808a8-51d8-40ee-8404-2a2691ec3e5d", "db_session_id": "C6ZLYERLHG17H91DUZDY", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 
12:03:17 vm00 ceph-mon[49203]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773144197437047, "job": 1, "event": "recovery_finished"} 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm00/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5621e4300e00 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: DB pointer 0x5621e440c000 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ** DB Stats ** 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 
0.00 MB, 0.00 MB/s 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ** Compaction Stats [default] ** 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: L0 2/0 84.65 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 47.5 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Sum 2/0 84.65 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 47.5 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 47.5 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ** Compaction Stats [default] ** 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 47.5 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Cumulative compaction: 0.00 GB write, 10.27 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Interval compaction: 0.00 GB write, 10.27 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T12:03:17.636 
INFO:journalctl@ceph.mon.vm00.vm00.stdout: Block cache BinnedLRUCache@0x5621e42ff350#6 capacity: 512.00 MB usage: 6.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: Block cache entry stats(count,size,portion): DataBlock(2,5.03 KB,0.000959635%) FilterBlock(2,0.77 KB,0.000146031%) IndexBlock(2,0.42 KB,8.04663e-05%) Misc(1,0.00 KB,0%) 2026-03-10T12:03:17.636 INFO:journalctl@ceph.mon.vm00.vm00.stdout: 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: mon.vm00 is new leader, mons vm00 in quorum (ranks 0) 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: monmap epoch 1 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: fsid fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: last_changed 2026-03-10T12:03:14.428878+0000 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: created 2026-03-10T12:03:14.428878+0000 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: min_mon_release 19 (squid) 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: election_strategy: 1 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.vm00 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: fsmap 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 
ceph-mon[49203]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T12:03:17.637 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:17 vm00 ceph-mon[49203]: mgrmap e1: no daemons active 2026-03-10T12:03:17.776 INFO:teuthology.orchestra.run.vm00.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T12:03:17.776 INFO:teuthology.orchestra.run.vm00.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T12:03:17.776 INFO:teuthology.orchestra.run.vm00.stdout:Creating mgr... 2026-03-10T12:03:17.777 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T12:03:17.778 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T12:03:17.778 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8443 ... 2026-03-10T12:03:17.940 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mgr.vm00.pahkwb 2026-03-10T12:03:17.940 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mgr.vm00.pahkwb.service: Unit ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mgr.vm00.pahkwb.service not loaded. 2026-03-10T12:03:18.085 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b.target.wants/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mgr.vm00.pahkwb.service → /etc/systemd/system/ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@.service. 2026-03-10T12:03:18.279 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T12:03:18.279 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T12:03:18.279 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T12:03:18.279 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[9283, 8765, 8443]>. 
firewalld.service is not available 2026-03-10T12:03:18.279 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr to start... 2026-03-10T12:03:18.279 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr... 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "fba12862-1c78-11f1-b92d-892b8c98a56b", 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "vm00" 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 1, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stdout "epoch": 1, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T12:03:18.628 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 
2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T12:03:15:796358+0000", 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:18.629 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:18.629 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T12:03:15.797263+0000", 2026-03-10T12:03:18.630 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:03:18.630 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:18.630 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T12:03:18.630 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:03:18.630 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (1/15)... 2026-03-10T12:03:18.917 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:18 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1301860875' entity='client.admin' 2026-03-10T12:03:18.917 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:18 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2747785845' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T12:03:20.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:20 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/1379288849' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "fba12862-1c78-11f1-b92d-892b8c98a56b", 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "vm00" 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: 
stdout "min_mon_release_name": "squid", 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T12:03:20.997 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 
2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T12:03:15:796358+0000", 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T12:03:20.998 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T12:03:20.999 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T12:03:15.797263+0000", 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:03:20.999 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (2/15)... 2026-03-10T12:03:21.925 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: Activating manager daemon vm00.pahkwb 2026-03-10T12:03:21.925 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: mgrmap e2: vm00.pahkwb(active, starting, since 0.00471452s) 2026-03-10T12:03:22.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:03:22.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:03:22.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:03:22.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:03:22.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 
ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr metadata", "who": "vm00.pahkwb", "id": "vm00.pahkwb"}]: dispatch 2026-03-10T12:03:22.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: Manager daemon vm00.pahkwb is now available 2026-03-10T12:03:22.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:03:22.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:22.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:22.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:22.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:21 vm00 ceph-mon[49203]: from='mgr.14100 192.168.123.100:0/1743854710' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/trash_purge_schedule"}]: dispatch 2026-03-10T12:03:23.439 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "fba12862-1c78-11f1-b92d-892b8c98a56b", 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T12:03:23.440 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "vm00" 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 
2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T12:03:23.440 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T12:03:15:796358+0000", 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 
2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T12:03:15.797263+0000", 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:03:23.441 INFO:teuthology.orchestra.run.vm00.stdout:mgr is 
available 2026-03-10T12:03:23.823 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:23 vm00 ceph-mon[49203]: mgrmap e3: vm00.pahkwb(active, since 1.01043s) 2026-03-10T12:03:23.824 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:23 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2874715699' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T12:03:23.824 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:23 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1036226598' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T12:03:23.835 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T12:03:23.835 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 
2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T12:03:23.836 INFO:teuthology.orchestra.run.vm00.stdout:Enabling cephadm module... 2026-03-10T12:03:24.992 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:24 vm00 ceph-mon[49203]: mgrmap e4: vm00.pahkwb(active, since 2s) 2026-03-10T12:03:24.992 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:24 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/877591209' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T12:03:25.281 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:03:25.281 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T12:03:25.281 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T12:03:25.281 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "vm00.pahkwb", 2026-03-10T12:03:25.281 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T12:03:25.281 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:03:25.281 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-10T12:03:25.281 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 5... 2026-03-10T12:03:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:25 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/877591209' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T12:03:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:25 vm00 ceph-mon[49203]: mgrmap e5: vm00.pahkwb(active, since 3s) 2026-03-10T12:03:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:25 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2808293169' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T12:03:29.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:29 vm00 ceph-mon[49203]: Active manager daemon vm00.pahkwb restarted 2026-03-10T12:03:29.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:29 vm00 ceph-mon[49203]: Activating manager daemon vm00.pahkwb 2026-03-10T12:03:30.382 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:03:30.382 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T12:03:30.382 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T12:03:30.382 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:03:30.382 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 5 is available 2026-03-10T12:03:30.382 INFO:teuthology.orchestra.run.vm00.stdout:Setting orchestrator backend to cephadm... 
2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: mgrmap e6: vm00.pahkwb(active, starting, since 1.09737s) 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr metadata", "who": "vm00.pahkwb", "id": "vm00.pahkwb"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: Manager daemon vm00.pahkwb is now available 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: Found migration_current of "None". Setting to last migration. 
2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/trash_purge_schedule"}]: dispatch 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:30.506 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:30 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:31.184 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T12:03:31.184 INFO:teuthology.orchestra.run.vm00.stdout:Generating ssh key... 
2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: mgrmap e7: vm00.pahkwb(active, since 2s) 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:30] ENGINE Bus STARTING 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:30] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:30] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:30] ENGINE Bus STARTED 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:30] ENGINE Client ('192.168.123.100', 48634) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:31.304 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:31.304 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:31 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:03:32.482 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:32 vm00 ceph-mon[49203]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:32.482 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:32 vm00 ceph-mon[49203]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:32.482 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:32 vm00 ceph-mon[49203]: Generating ssh key... 2026-03-10T12:03:32.482 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:32 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:32.482 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:32 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:32.808 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/M4UGmaV0Vh5MDiFqG4Vy+SFc1zIj8u3wBmrJ+fXaQ ceph-fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:03:32.808 INFO:teuthology.orchestra.run.vm00.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T12:03:32.808 INFO:teuthology.orchestra.run.vm00.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T12:03:32.808 INFO:teuthology.orchestra.run.vm00.stdout:Adding host vm00... 
2026-03-10T12:03:33.849 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:33 vm00 ceph-mon[49203]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:34.781 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Added host 'vm00' with addr '192.168.123.100' 2026-03-10T12:03:34.781 INFO:teuthology.orchestra.run.vm00.stdout:Deploying mon service with default placement... 2026-03-10T12:03:34.891 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:34 vm00 ceph-mon[49203]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:34.891 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:34 vm00 ceph-mon[49203]: Deploying cephadm binary to vm00 2026-03-10T12:03:34.891 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:34 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:34.891 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:34 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:03:35.191 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T12:03:35.191 INFO:teuthology.orchestra.run.vm00.stdout:Deploying mgr service with default placement... 2026-03-10T12:03:35.597 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T12:03:35.597 INFO:teuthology.orchestra.run.vm00.stdout:Deploying crash service with default placement... 
2026-03-10T12:03:35.712 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:35 vm00 ceph-mon[49203]: Added host vm00 2026-03-10T12:03:35.712 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:35 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:35.712 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:35 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:36.039 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled crash update... 2026-03-10T12:03:36.039 INFO:teuthology.orchestra.run.vm00.stdout:Deploying ceph-exporter service with default placement... 2026-03-10T12:03:36.483 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update... 2026-03-10T12:03:36.483 INFO:teuthology.orchestra.run.vm00.stdout:Deploying prometheus service with default placement... 2026-03-10T12:03:36.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:36 vm00 ceph-mon[49203]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:36.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:36 vm00 ceph-mon[49203]: Saving service mon spec with placement count:5 2026-03-10T12:03:36.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:36 vm00 ceph-mon[49203]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:36.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:36 vm00 ceph-mon[49203]: Saving service mgr spec with placement count:2 2026-03-10T12:03:36.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:36 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:36.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:36 vm00 
ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:36.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:36 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:37.024 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled prometheus update... 2026-03-10T12:03:37.024 INFO:teuthology.orchestra.run.vm00.stdout:Deploying grafana service with default placement... 2026-03-10T12:03:37.920 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled grafana update... 2026-03-10T12:03:37.920 INFO:teuthology.orchestra.run.vm00.stdout:Deploying node-exporter service with default placement... 2026-03-10T12:03:38.184 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:38.184 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: Saving service crash spec with placement * 2026-03-10T12:03:38.184 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:38.184 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: Saving service ceph-exporter spec with placement * 2026-03-10T12:03:38.184 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:38.184 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:38.184 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: Saving service prometheus spec with placement count:1 2026-03-10T12:03:38.184 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:38.184 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:37 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:38.327 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update... 2026-03-10T12:03:38.327 INFO:teuthology.orchestra.run.vm00.stdout:Deploying alertmanager service with default placement... 2026-03-10T12:03:38.901 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update... 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: Saving service grafana spec with placement count:1 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: Saving service node-exporter spec with placement * 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' 
entity='mgr.vm00.pahkwb' 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: Saving service alertmanager spec with placement count:1 2026-03-10T12:03:38.990 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:38 vm00 ceph-mon[49203]: from='mgr.14118 192.168.123.100:0/2770729931' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:39.739 INFO:teuthology.orchestra.run.vm00.stdout:Enabling the dashboard module... 2026-03-10T12:03:40.155 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:40 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1758463831' entity='client.admin' 2026-03-10T12:03:40.155 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:40 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3838013533' entity='client.admin' 2026-03-10T12:03:40.155 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:40 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/2433302616' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T12:03:41.098 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:03:41.098 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 8, 2026-03-10T12:03:41.098 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T12:03:41.098 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "vm00.pahkwb", 2026-03-10T12:03:41.098 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T12:03:41.098 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:03:41.098 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-10T12:03:41.098 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 8... 2026-03-10T12:03:41.803 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:41 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2433302616' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T12:03:41.803 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:41 vm00 ceph-mon[49203]: mgrmap e8: vm00.pahkwb(active, since 12s) 2026-03-10T12:03:41.803 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:41 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/2410827955' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: Active manager daemon vm00.pahkwb restarted 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: Activating manager daemon vm00.pahkwb 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: mgrmap e9: vm00.pahkwb(active, starting, since 0.00456763s) 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr metadata", "who": "vm00.pahkwb", "id": "vm00.pahkwb"}]: dispatch 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: Manager daemon vm00.pahkwb is now available 2026-03-10T12:03:44.439 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:03:44.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:44 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:03:45.172 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T12:03:45.172 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-10T12:03:45.172 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T12:03:45.172 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T12:03:45.172 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 8 is available 2026-03-10T12:03:45.172 INFO:teuthology.orchestra.run.vm00.stdout:Generating a dashboard self-signed certificate... 2026-03-10T12:03:45.278 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:45 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/trash_purge_schedule"}]: dispatch 2026-03-10T12:03:45.278 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:45 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:45.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:45 vm00 ceph-mon[49203]: mgrmap e10: vm00.pahkwb(active, since 1.00836s) 2026-03-10T12:03:45.745 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T12:03:45.745 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial admin user... 
2026-03-10T12:03:46.287 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$ajhMAc6rAjm2l6G755c1BewC/s/jkWWt.ZYzK25r65hCrE2hmJUXC", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773144226, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T12:03:46.287 INFO:teuthology.orchestra.run.vm00.stdout:Fetching dashboard port number... 2026-03-10T12:03:46.356 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:46.356 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:45] ENGINE Bus STARTING 2026-03-10T12:03:46.356 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:45] ENGINE Client ('192.168.123.100', 33214) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T12:03:46.356 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:45] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T12:03:46.356 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:45] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T12:03:46.356 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: [10/Mar/2026:12:03:45] ENGINE Bus STARTED 2026-03-10T12:03:46.356 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:46.356 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:46.356 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:46 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:47.223 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T12:03:47.223 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T12:03:47.223 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-10T12:03:47.225 INFO:teuthology.orchestra.run.vm00.stdout:Ceph Dashboard is now available at: 2026-03-10T12:03:47.225 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.225 INFO:teuthology.orchestra.run.vm00.stdout: URL: https://vm00.local:8443/ 2026-03-10T12:03:47.225 INFO:teuthology.orchestra.run.vm00.stdout: User: admin 2026-03-10T12:03:47.225 INFO:teuthology.orchestra.run.vm00.stdout: Password: 7wf53bvgje 2026-03-10T12:03:47.225 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.225 INFO:teuthology.orchestra.run.vm00.stdout:Saving cluster configuration to /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config directory 2026-03-10T12:03:47.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:47 vm00 ceph-mon[49203]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:47.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:47 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:47.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:47 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/1974214826' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T12:03:47.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:47 vm00 ceph-mon[49203]: mgrmap e11: vm00.pahkwb(active, since 2s) 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.682 INFO:teuthology.orchestra.run.vm00.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T12:03:47.683 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.683 INFO:teuthology.orchestra.run.vm00.stdout: ceph telemetry on 2026-03-10T12:03:47.683 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.683 INFO:teuthology.orchestra.run.vm00.stdout:For more information see: 2026-03-10T12:03:47.683 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.683 INFO:teuthology.orchestra.run.vm00.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T12:03:47.683 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:47.683 
INFO:teuthology.orchestra.run.vm00.stdout:Bootstrap complete. 2026-03-10T12:03:47.719 INFO:tasks.cephadm:Fetching config... 2026-03-10T12:03:47.719 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T12:03:47.719 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T12:03:47.741 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T12:03:47.741 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T12:03:47.741 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T12:03:47.813 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T12:03:47.814 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T12:03:47.814 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/keyring of=/dev/stdout 2026-03-10T12:03:47.885 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T12:03:47.885 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T12:03:47.885 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T12:03:47.944 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-10T12:03:47.944 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/M4UGmaV0Vh5MDiFqG4Vy+SFc1zIj8u3wBmrJ+fXaQ ceph-fba12862-1c78-11f1-b92d-892b8c98a56b' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T12:03:48.054 INFO:teuthology.orchestra.run.vm00.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/M4UGmaV0Vh5MDiFqG4Vy+SFc1zIj8u3wBmrJ+fXaQ ceph-fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:03:48.077 DEBUG:teuthology.orchestra.run.vm09:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/M4UGmaV0Vh5MDiFqG4Vy+SFc1zIj8u3wBmrJ+fXaQ ceph-fba12862-1c78-11f1-b92d-892b8c98a56b' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T12:03:48.109 INFO:teuthology.orchestra.run.vm09.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE/M4UGmaV0Vh5MDiFqG4Vy+SFc1zIj8u3wBmrJ+fXaQ ceph-fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:03:48.118 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T12:03:48.311 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:03:48.603 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:48 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/54287241' entity='client.admin' 2026-03-10T12:03:48.764 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T12:03:48.764 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T12:03:49.078 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:03:49.568 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/884116060' entity='client.admin' 2026-03-10T12:03:49.568 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:49.568 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:49.568 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:03:49.568 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:49.568 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm00", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T12:03:49.568 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm00", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T12:03:49.568 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:03:49.568 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:49 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:49.617 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09 2026-03-10T12:03:49.617 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T12:03:49.617 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf 2026-03-10T12:03:49.634 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T12:03:49.634 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:03:49.691 INFO:tasks.cephadm:Adding host vm09 to orchestrator... 
2026-03-10T12:03:49.691 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch host add vm09 2026-03-10T12:03:49.895 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:03:50.741 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:50 vm00 ceph-mon[49203]: Deploying daemon ceph-exporter.vm00 on vm00 2026-03-10T12:03:50.741 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:50 vm00 ceph-mon[49203]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: mgrmap e12: vm00.pahkwb(active, since 6s) 2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 
2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm00", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm00", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T12:03:51.839 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:51 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:03:51.887 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm09' with addr '192.168.123.109' 2026-03-10T12:03:52.164 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch host ls --format=json 2026-03-10T12:03:52.532 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:03:52.843 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:52 vm00 ceph-mon[49203]: Deploying cephadm binary to vm09 2026-03-10T12:03:52.843 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:52 vm00 ceph-mon[49203]: Deploying daemon crash.vm00 on vm00 2026-03-10T12:03:52.843 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:52 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:52.843 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:52 vm00 ceph-mon[49203]: 
from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:52.843 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:52 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:52.843 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:03:52.843 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}] 2026-03-10T12:03:53.061 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T12:03:53.061 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd crush tunables default 2026-03-10T12:03:53.352 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:03:53.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:53 vm00 ceph-mon[49203]: Added host vm09 2026-03-10T12:03:53.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:53 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:53.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:53 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:53.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:53 vm00 ceph-mon[49203]: Deploying daemon node-exporter.vm00 on vm00 2026-03-10T12:03:54.593 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-10T12:03:54.817 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:54 vm00 ceph-mon[49203]: from='client.14189 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": 
["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:03:54.817 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:54 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1078789207' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T12:03:54.817 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:54 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:54.852 INFO:tasks.cephadm:Adding mon.vm00 on vm00 2026-03-10T12:03:54.852 INFO:tasks.cephadm:Adding mon.vm09 on vm09 2026-03-10T12:03:54.853 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch apply mon '2;vm00:192.168.123.100=vm00;vm09:192.168.123.109=vm09' 2026-03-10T12:03:55.026 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:03:55.071 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:03:55.365 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update... 2026-03-10T12:03:55.557 DEBUG:teuthology.orchestra.run.vm09:mon.vm09> sudo journalctl -f -n 0 -u ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm09.service 2026-03-10T12:03:55.558 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:03:55.558 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:03:55.756 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:03:55.794 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:03:56.081 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:03:56.081 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:03:56.081 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:03:56.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:55 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/1078789207' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T12:03:56.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:55 vm00 ceph-mon[49203]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T12:03:56.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:55 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:57.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:56 vm00 ceph-mon[49203]: from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm00:192.168.123.100=vm00;vm09:192.168.123.109=vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:03:57.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:56 vm00 ceph-mon[49203]: Saving service mon spec with placement vm00:192.168.123.100=vm00;vm09:192.168.123.109=vm09;count:2 2026-03-10T12:03:57.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:56 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:57.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:56 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:57.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:56 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:57.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:56 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:03:57.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:56 vm00 ceph-mon[49203]: Deploying daemon alertmanager.vm00 on vm00 2026-03-10T12:03:57.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:56 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/3588158719' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:03:57.263 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:03:57.263 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:03:57.446 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:03:57.483 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:03:57.767 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:03:57.767 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:03:57.767 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:03:58.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:03:57 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/1008369580' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:03:58.942 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:03:58.942 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:03:59.111 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:03:59.148 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:03:59.425 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:03:59.425 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:03:59.425 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:00.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/1794809338' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:00.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T12:04:00.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:00 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:00.598 INFO:tasks.cephadm:Waiting for 
2 mons in monmap... 2026-03-10T12:04:00.598 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:00.770 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:00.806 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:01.075 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:01.076 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:01.076 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:01.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:01 vm00 ceph-mon[49203]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T12:04:01.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:01 vm00 ceph-mon[49203]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T12:04:01.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:01 vm00 ceph-mon[49203]: Deploying daemon grafana.vm00 on vm00 2026-03-10T12:04:02.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:02 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/214431209' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:02.227 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:02.228 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:02.409 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:02.450 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:02.732 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:02.732 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:02.732 
INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:03.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:03 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/152589144' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:03.910 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:03.910 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:04.079 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:04.115 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:04.388 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:04.389 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:04.389 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:05.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:05 vm00 ceph-mon[49203]: pgmap v4: 0 pgs: ; 0 B data, 0 B 
used, 0 B / 0 B avail 2026-03-10T12:04:05.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:05 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:05.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:05 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/398388241' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:05.558 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:05.558 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:05.720 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:05.756 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:06.023 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:06.024 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:06.024 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 
2026-03-10T12:04:06.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:06 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/3155923530' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:07.220 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:07.220 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:07.391 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:07.430 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:07.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:07 vm00 ceph-mon[49203]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:07.703 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:07.703 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:07.703 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:08.688 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:08 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/855101250' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:08.878 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:08.878 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:09.044 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:09.081 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:09.360 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:09.360 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:09.360 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:09.411 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:09 vm00 ceph-mon[49203]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 
10 12:04:10 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/3593350863' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:10 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:10 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:10 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:10 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:10 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:10 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:10 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:10.311 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:10 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:10.608 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:04:10.608 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:10.789 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:10.824 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:11.116 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:11.116 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:11.116 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:11.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:11 vm00 ceph-mon[49203]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:11.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:11 vm00 ceph-mon[49203]: Deploying daemon prometheus.vm00 on vm00 2026-03-10T12:04:11.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:11 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/2878269327' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:12.294 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:12.294 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:12.461 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:12.500 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:12.759 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:12.759 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:12.759 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:13.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:13 vm00 ceph-mon[49203]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:13.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:13 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/98764728' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:13.929 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:13.930 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:14.100 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:14.140 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:14.428 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:14.428 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:14.428 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:15.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:15 vm00 ceph-mon[49203]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:15.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:15 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' 
entity='mgr.vm00.pahkwb' 2026-03-10T12:04:15.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:15 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/1677556616' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:15.602 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:15.602 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:15.763 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:15.803 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:16.082 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:16.082 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:16.082 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:16.169 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:16 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/4189358620' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:17.250 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:17.250 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:17.409 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:17.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:17 vm00 ceph-mon[49203]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:17.484 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:17.860 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:17.860 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:17.860 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:18.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:18 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' 
entity='mgr.vm00.pahkwb' 2026-03-10T12:04:18.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:18 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:18.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:18 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:18.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:18 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T12:04:18.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:18 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/317466555' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:19.009 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:19.009 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:19.169 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:19.198 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:19.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:19 vm00 ceph-mon[49203]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T12:04:19.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:19 vm00 ceph-mon[49203]: mgrmap e13: vm00.pahkwb(active, since 34s) 2026-03-10T12:04:19.459 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:19.459 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:19.459 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:20.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:20 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/1438133962' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:20.638 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:04:20.638 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:20.812 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:20.850 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:21.138 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:21.138 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:21.138 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:21.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:21 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/1038007978' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:22.590 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:04:22.590 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:22.757 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:22.828 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:22.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: Active manager daemon vm00.pahkwb restarted 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: Activating manager daemon vm00.pahkwb 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: mgrmap e14: vm00.pahkwb(active, starting, since 0.00596354s) 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr metadata", "who": "vm00.pahkwb", "id": "vm00.pahkwb"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 
cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: Manager daemon vm00.pahkwb is now available 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/trash_purge_schedule"}]: dispatch 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:22.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:23.284 
INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:23.285 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:23.285 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:23.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:23 vm00 ceph-mon[49203]: mgrmap e15: vm00.pahkwb(active, since 1.01296s) 2026-03-10T12:04:23.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:23 vm00 ceph-mon[49203]: [10/Mar/2026:12:04:22] ENGINE Bus STARTING 2026-03-10T12:04:23.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:23 vm00 ceph-mon[49203]: [10/Mar/2026:12:04:22] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T12:04:23.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:23 vm00 ceph-mon[49203]: [10/Mar/2026:12:04:23] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T12:04:23.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:23 vm00 ceph-mon[49203]: [10/Mar/2026:12:04:23] ENGINE Bus STARTED 2026-03-10T12:04:23.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:23 vm00 ceph-mon[49203]: [10/Mar/2026:12:04:23] ENGINE Client ('192.168.123.100', 49606) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL 
connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T12:04:23.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:23 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:23.635 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:23 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/34822713' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:24.492 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:24.492 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:24.667 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:24.711 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T12:04:25.028 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:25.028 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:25.028 
INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:25.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:24 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:25.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:24 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:25.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:24 vm00 ceph-mon[49203]: mgrmap e16: vm00.pahkwb(active, since 2s) 2026-03-10T12:04:25.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:24 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:25.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:24 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:25.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:24 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:04:26.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 
192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: Updating vm00:/etc/ceph/ceph.conf 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/3905949130' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: Updating vm00:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: Updating vm09:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", 
"allow r"]}]: dispatch 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T12:04:26.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:25 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:26.200 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:26.200 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:26.509 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf 2026-03-10T12:04:26.947 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:26.947 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:26.947 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:27.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:26 vm00 ceph-mon[49203]: Updating vm09:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.client.admin.keyring 2026-03-10T12:04:27.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:26 vm00 ceph-mon[49203]: Updating vm00:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.client.admin.keyring 2026-03-10T12:04:27.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:26 vm00 ceph-mon[49203]: Deploying daemon ceph-exporter.vm09 on vm09 2026-03-10T12:04:28.108 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:04:28.108 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: Deploying daemon crash.vm09 on 
vm09 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/743736039' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:28.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:28.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:28.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:27 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:28.335 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf 2026-03-10T12:04:28.632 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:28.632 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:28.632 
INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:29.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:28 vm00 ceph-mon[49203]: Deploying daemon node-exporter.vm09 on vm09 2026-03-10T12:04:29.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:28 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/3173040080' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:29.798 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:29.798 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:29.967 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf 2026-03-10T12:04:30.285 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:30.285 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:30.285 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:30.688 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:30 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/2014115990' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:31.444 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T12:04:31.444 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:31.684 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.xttkce", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm09.xttkce", 
"caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: Deploying daemon mgr.vm09.xttkce on vm09 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:31.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:31.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:31.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:31.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T12:04:31.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:31.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:31 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' 
entity='mgr.vm00.pahkwb' 2026-03-10T12:04:32.125 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:32.125 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:03:14.428878Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T12:04:32.125 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-10T12:04:32.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:32 vm00 ceph-mon[49203]: Deploying daemon mon.vm09 on vm09 2026-03-10T12:04:32.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:32 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/3474344912' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.064 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-10T12:04:33.299 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T12:04:33.299 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mon dump -f json 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e5 e5: 0 total, 0 up, 0 in 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e5 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).osd e5 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.329 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/317466555' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14162 192.168.123.100:0/2203788286' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mgrmap e13: vm00.pahkwb(active, since 34s) 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/1438133962' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='client.? 
192.168.123.109:0/1038007978' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Active manager daemon vm00.pahkwb restarted 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Activating manager daemon vm00.pahkwb 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mgrmap e14: vm00.pahkwb(active, starting, since 0.00596354s) 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr metadata", "who": "vm00.pahkwb", "id": "vm00.pahkwb"}]: dispatch 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T12:04:33.329 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Manager daemon vm00.pahkwb is now available 
2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/mirror_snapshot_schedule"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm00.pahkwb/trash_purge_schedule"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mgrmap e15: vm00.pahkwb(active, since 1.01296s) 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: [10/Mar/2026:12:04:22] ENGINE Bus STARTING 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: [10/Mar/2026:12:04:22] ENGINE 
Serving on http://192.168.123.100:8765 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: [10/Mar/2026:12:04:23] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: [10/Mar/2026:12:04:23] ENGINE Bus STARTED 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: [10/Mar/2026:12:04:23] ENGINE Client ('192.168.123.100', 49606) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/34822713' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mgrmap e16: vm00.pahkwb(active, since 2s) 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 
ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Updating vm00:/etc/ceph/ceph.conf 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Updating vm09:/etc/ceph/ceph.conf 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: 
from='client.? 192.168.123.109:0/3905949130' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Updating vm00:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Updating vm09:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Updating vm09:/etc/ceph/ceph.client.admin.keyring 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow 
r", "osd", "allow r"]}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Updating vm09:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.client.admin.keyring 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Updating vm00:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.client.admin.keyring 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Deploying daemon ceph-exporter.vm09 on vm09 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 
192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Deploying daemon crash.vm09 on vm09 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/743736039' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.330 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Deploying daemon node-exporter.vm09 on vm09 2026-03-10T12:04:33.331 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/3173040080' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/2014115990' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.xttkce", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm09.xttkce", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr services"}]: dispatch 
2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Deploying daemon mgr.vm09.xttkce on vm09 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: Deploying daemon mon.vm09 on vm09 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: from='client.? 
192.168.123.109:0/3474344912' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T12:04:33.331 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:33 vm09 ceph-mon[57971]: mon.vm09@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3 2026-03-10T12:04:33.505 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm09/config 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: mon.vm00 calling monitor election 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.? 
192.168.123.109:0/3121874546' entity='mgr.vm09.xttkce' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.xttkce/crt"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: mon.vm09 calling monitor election 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: mon.vm00 is new leader, mons vm00,vm09 in quorum (ranks 0,1) 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: monmap epoch 2 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: fsid fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: last_changed 
2026-03-10T12:04:33.071952+0000 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: created 2026-03-10T12:03:14.428878+0000 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: min_mon_release 19 (squid) 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: election_strategy: 1 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.vm00 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: fsmap 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: mgrmap e16: vm00.pahkwb(active, since 16s) 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: overall HEALTH_OK 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: Standby manager daemon vm09.xttkce started 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.? 192.168.123.109:0/3121874546' entity='mgr.vm09.xttkce' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.? 
192.168.123.109:0/3121874546' entity='mgr.vm09.xttkce' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.xttkce/key"}]: dispatch 2026-03-10T12:04:38.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:38 vm00 ceph-mon[49203]: from='mgr.? 192.168.123.109:0/3121874546' entity='mgr.vm09.xttkce' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: mon.vm00 calling monitor election 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.? 
192.168.123.109:0/3121874546' entity='mgr.vm09.xttkce' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.xttkce/crt"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: mon.vm09 calling monitor election 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:04:38.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: mon.vm00 is new leader, mons vm00,vm09 in quorum (ranks 0,1) 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: monmap epoch 2 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: fsid fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: last_changed 
2026-03-10T12:04:33.071952+0000 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: created 2026-03-10T12:03:14.428878+0000 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: min_mon_release 19 (squid) 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: election_strategy: 1 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.vm00 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.vm09 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: fsmap 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: osdmap e5: 0 total, 0 up, 0 in 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: mgrmap e16: vm00.pahkwb(active, since 16s) 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: overall HEALTH_OK 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: Standby manager daemon vm09.xttkce started 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.? 192.168.123.109:0/3121874546' entity='mgr.vm09.xttkce' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.? 
192.168.123.109:0/3121874546' entity='mgr.vm09.xttkce' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm09.xttkce/key"}]: dispatch 2026-03-10T12:04:38.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:38 vm09 ceph-mon[57971]: from='mgr.? 192.168.123.109:0/3121874546' entity='mgr.vm09.xttkce' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T12:04:39.536 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:39 vm00 ceph-mon[49203]: mgrmap e17: vm00.pahkwb(active, since 16s), standbys: vm09.xttkce 2026-03-10T12:04:39.536 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr metadata", "who": "vm09.xttkce", "id": "vm09.xttkce"}]: dispatch 2026-03-10T12:04:39.536 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:39.536 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:39.536 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:39.536 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:04:39.536 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:39.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:39 vm09 ceph-mon[57971]: mgrmap e17: 
vm00.pahkwb(active, since 16s), standbys: vm09.xttkce 2026-03-10T12:04:39.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr metadata", "who": "vm09.xttkce", "id": "vm09.xttkce"}]: dispatch 2026-03-10T12:04:39.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:39.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:04:39.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:04:39.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:04:39.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:04:39.813 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-10T12:04:39.813 
INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","modified":"2026-03-10T12:04:33.071952Z","created":"2026-03-10T12:03:14.428878Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm00","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm09","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T12:04:39.813 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2 2026-03-10T12:04:39.964 INFO:tasks.cephadm:Generating final ceph.conf file... 
2026-03-10T12:04:39.964 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph config generate-minimal-conf
2026-03-10T12:04:40.189 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:04:40.467 INFO:teuthology.orchestra.run.vm00.stdout:# minimal ceph.conf for fba12862-1c78-11f1-b92d-892b8c98a56b
2026-03-10T12:04:40.467 INFO:teuthology.orchestra.run.vm00.stdout:[global]
2026-03-10T12:04:40.467 INFO:teuthology.orchestra.run.vm00.stdout: fsid = fba12862-1c78-11f1-b92d-892b8c98a56b
2026-03-10T12:04:40.467 INFO:teuthology.orchestra.run.vm00.stdout: mon_host = [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0]
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: Updating vm00:/etc/ceph/ceph.conf
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: Updating vm09:/etc/ceph/ceph.conf
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: Updating vm09:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: Updating vm00:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm00.pahkwb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:40.479 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/1580089500' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T12:04:40.480 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.480 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.480 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm00", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T12:04:40.480 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:40.627 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T12:04:40.627 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T12:04:40.627 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T12:04:40.659 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T12:04:40.659 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: Updating vm00:/etc/ceph/ceph.conf
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: Updating vm09:/etc/ceph/ceph.conf
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: Updating vm09:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: Updating vm00:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/config/ceph.conf
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm00.pahkwb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:40.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/1580089500' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T12:04:40.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:40.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm00", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T12:04:40.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:40.724 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T12:04:40.724 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T12:04:40.752 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T12:04:40.752 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T12:04:40.819 INFO:tasks.cephadm:Deploying OSDs...
2026-03-10T12:04:40.819 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T12:04:40.819 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T12:04:40.841 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:04:40.841 DEBUG:teuthology.orchestra.run.vm00:> ls /dev/[sv]d?
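The device-discovery step above first tries to read a `/scratch_devs` manifest; when that fails (remote process result 1) it falls back to globbing `ls /dev/[sv]d?` and warns that the root device is being dropped from the list. A minimal Python sketch of that fallback filtering, assuming the root device is the first virtio disk as the log's warning states (the function name is illustrative, not teuthology's own):

```python
def pick_scratch_devs(ls_output: str, root_dev: str = "/dev/vda") -> list[str]:
    """Mimic the fallback device scan seen in the log: parse the
    `ls /dev/[sv]d?` listing and drop the root device, leaving only
    scratch disks that may be handed to the OSD deployment step."""
    devs = [line.strip() for line in ls_output.splitlines() if line.strip()]
    return [d for d in devs if d != root_dev]

listing = "/dev/vda\n/dev/vdb\n/dev/vdc\n/dev/vdd\n/dev/vde\n"
print(pick_scratch_devs(listing))  # ['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
```

This reproduces the `devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']` debug line that follows the warning.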
2026-03-10T12:04:40.897 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vda
2026-03-10T12:04:40.897 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdb
2026-03-10T12:04:40.897 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdc
2026-03-10T12:04:40.897 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdd
2026-03-10T12:04:40.897 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vde
2026-03-10T12:04:40.897 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T12:04:40.897 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T12:04:40.897 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdb
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdb
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 12:03:49.920953950 +0000
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 12:02:13.977197767 +0000
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 12:02:13.977197767 +0000
2026-03-10T12:04:40.955 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-10 11:58:48.250000000 +0000
2026-03-10T12:04:40.956 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T12:04:41.032 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T12:04:41.032 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T12:04:41.032 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000236562 s, 2.2 MB/s
2026-03-10T12:04:41.033 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T12:04:41.096 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdc
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdc
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 222 Links: 1 Device type: fc,20
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 12:03:49.952953944 +0000
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 12:02:13.985197770 +0000
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 12:02:13.985197770 +0000
2026-03-10T12:04:41.160 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-10 11:58:48.253000000 +0000
2026-03-10T12:04:41.160 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T12:04:41.249 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T12:04:41.249 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T12:04:41.249 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000143508 s, 3.6 MB/s
2026-03-10T12:04:41.250 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T12:04:41.269 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdd
2026-03-10T12:04:41.325 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdd
2026-03-10T12:04:41.325 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T12:04:41.325 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 223 Links: 1 Device type: fc,30
2026-03-10T12:04:41.325 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:04:41.325 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T12:04:41.326 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 12:03:49.977953939 +0000
2026-03-10T12:04:41.326 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 12:02:13.965197763 +0000
2026-03-10T12:04:41.326 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 12:02:13.965197763 +0000
2026-03-10T12:04:41.326 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-10 11:58:48.256000000 +0000
2026-03-10T12:04:41.326 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T12:04:41.389 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T12:04:41.390 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T12:04:41.390 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000132528 s, 3.9 MB/s
2026-03-10T12:04:41.391 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: Reconfiguring mon.vm00 (unknown last config time)...
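Each candidate disk above goes through the same three-step probe: `stat` it, read one sector with `dd`, then require that `! mount | grep -v devtmpfs | grep -q <dev>` succeeds, i.e. the device appears in no mount line apart from devtmpfs entries. A small Python sketch of that last predicate as a pure function over captured `mount` output (the function name is mine, not teuthology's):

```python
def device_is_unmounted(dev: str, mount_output: str) -> bool:
    """Equivalent of the shell check `! mount | grep -v devtmpfs | grep -q <dev>`
    from the log: the device passes only if no mount line, other than
    devtmpfs entries, mentions it (substring match, like grep)."""
    for line in mount_output.splitlines():
        if "devtmpfs" in line:
            continue  # grep -v devtmpfs drops these lines
        if dev in line:
            return False  # device is mounted somewhere: unusable as scratch
    return True

mounts = "/dev/vda1 on / type xfs (rw)\ndevtmpfs on /dev type devtmpfs (rw)\n"
print(device_is_unmounted("/dev/vdb", mounts))  # True
print(device_is_unmounted("/dev/vda", mounts))  # False
```

Like the grep pipeline, the substring match treats `/dev/vda1` as a hit for `/dev/vda`, which is exactly why the root device is excluded earlier rather than relying on this check.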
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: Reconfiguring daemon mon.vm00 on vm00
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: Reconfiguring mgr.vm00.pahkwb (unknown last config time)...
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: Reconfiguring daemon mgr.vm00.pahkwb on vm00
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: Reconfiguring ceph-exporter.vm00 (monmap changed)...
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: Reconfiguring daemon ceph-exporter.vm00 on vm00
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3022441699' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm00", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:41.446 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:41.458 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vde
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vde
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 12:03:50.002953935 +0000
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 12:02:13.948197757 +0000
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 12:02:13.948197757 +0000
2026-03-10T12:04:41.536 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-10 11:58:48.365000000 +0000
2026-03-10T12:04:41.536 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T12:04:41.600 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in
2026-03-10T12:04:41.601 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out
2026-03-10T12:04:41.601 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000195956 s, 2.6 MB/s
2026-03-10T12:04:41.601 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T12:04:41.663 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-10T12:04:41.663 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T12:04:41.679 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:04:41.680 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d?
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: Reconfiguring mon.vm00 (unknown last config time)...
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: Reconfiguring daemon mon.vm00 on vm00
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: Reconfiguring mgr.vm00.pahkwb (unknown last config time)...
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: Reconfiguring daemon mgr.vm00.pahkwb on vm00
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: Reconfiguring ceph-exporter.vm00 (monmap changed)...
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: Reconfiguring daemon ceph-exporter.vm00 on vm00
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3022441699' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm00", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:41.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:41.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:41.719 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda
2026-03-10T12:04:41.720 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb
2026-03-10T12:04:41.720 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc
2026-03-10T12:04:41.720 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd
2026-03-10T12:04:41.720 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde
2026-03-10T12:04:41.720 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T12:04:41.720 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T12:04:41.720 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 12:04:24.597292136 +0000
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 12:02:15.978452123 +0000
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 12:02:15.978452123 +0000
2026-03-10T12:04:41.779 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 11:59:12.252000000 +0000
2026-03-10T12:04:41.779 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T12:04:41.842 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T12:04:41.843 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T12:04:41.843 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000214909 s, 2.4 MB/s
2026-03-10T12:04:41.843 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T12:04:41.902 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc
2026-03-10T12:04:41.961 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc
2026-03-10T12:04:41.961 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T12:04:41.961 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T12:04:41.961 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:04:41.962 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T12:04:41.962 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 12:04:24.635292180 +0000
2026-03-10T12:04:41.962 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 12:02:15.984452131 +0000
2026-03-10T12:04:41.962 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 12:02:15.984452131 +0000
2026-03-10T12:04:41.962 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 11:59:12.259000000 +0000
2026-03-10T12:04:41.962 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T12:04:42.027 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T12:04:42.027 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T12:04:42.027 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000181327 s, 2.8 MB/s
2026-03-10T12:04:42.028 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T12:04:42.088 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 12:04:24.677292228 +0000
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 12:02:15.991452139 +0000
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 12:02:15.991452139 +0000
2026-03-10T12:04:42.146 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 11:59:12.267000000 +0000
2026-03-10T12:04:42.146 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T12:04:42.211 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T12:04:42.211 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T12:04:42.211 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000158225 s, 3.2 MB/s
2026-03-10T12:04:42.212 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T12:04:42.269 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde
2026-03-10T12:04:42.293 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: Reconfiguring crash.vm00 (monmap changed)...
2026-03-10T12:04:42.326 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde
2026-03-10T12:04:42.326 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T12:04:42.326 INFO:teuthology.orchestra.run.vm09.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T12:04:42.326 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T12:04:42.327 INFO:teuthology.orchestra.run.vm09.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T12:04:42.327 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-10 12:04:24.720292277 +0000
2026-03-10T12:04:42.327 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-10 12:02:15.965452107 +0000
2026-03-10T12:04:42.327 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-10 12:02:15.965452107 +0000
2026-03-10T12:04:42.327 INFO:teuthology.orchestra.run.vm09.stdout: Birth: 2026-03-10 11:59:12.292000000 +0000
2026-03-10T12:04:42.327 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T12:04:42.391 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-10T12:04:42.391 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-10T12:04:42.391 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000124532 s, 4.1 MB/s
2026-03-10T12:04:42.392 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T12:04:42.451 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch apply osd --all-available-devices
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: Reconfiguring crash.vm00 (monmap changed)...
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: Reconfiguring daemon crash.vm00 on vm00
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: Reconfiguring alertmanager.vm00 (dependencies changed)...
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: Reconfiguring daemon alertmanager.vm00 on vm00
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: Reconfiguring grafana.vm00 (dependencies changed)...
2026-03-10T12:04:42.514 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:42 vm09 ceph-mon[57971]: Reconfiguring daemon grafana.vm00 on vm00
2026-03-10T12:04:42.593 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: Reconfiguring daemon crash.vm00 on vm00
2026-03-10T12:04:42.593 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: Reconfiguring alertmanager.vm00 (dependencies changed)...
2026-03-10T12:04:42.593 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: Reconfiguring daemon alertmanager.vm00 on vm00
2026-03-10T12:04:42.593 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:42.593 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:42.593 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:42.593 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: Reconfiguring grafana.vm00 (dependencies changed)...
2026-03-10T12:04:42.593 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:42 vm00 ceph-mon[49203]: Reconfiguring daemon grafana.vm00 on vm00
2026-03-10T12:04:42.664 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm09/config
2026-03-10T12:04:42.902 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled osd.all-available-devices update...
2026-03-10T12:04:43.075 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
2026-03-10T12:04:43.076 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:04:43.291 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:04:43.616 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:04:43.805 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T12:04:44.097 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:43 vm00 ceph-mon[49203]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:04:44.097 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:43 vm00 ceph-mon[49203]: Marking host: vm00 for OSDSpec preview refresh.
2026-03-10T12:04:44.097 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:43 vm00 ceph-mon[49203]: Marking host: vm09 for OSDSpec preview refresh.
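The "Waiting for 8 OSDs to come up..." step above polls `ceph osd stat -f json` and inspects `num_up_osds` in the returned JSON. A hedged Python sketch of that polling pattern; `run_osd_stat` stands in for shelling out through cephadm, and the helper names are illustrative rather than teuthology's:

```python
import json

def osds_up(osd_stat_json: str) -> int:
    """Parse the `ceph osd stat -f json` payload and return num_up_osds."""
    return json.loads(osd_stat_json)["num_up_osds"]

# The exact payload logged above, before any OSDs were created:
payload = ('{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,'
           '"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}')
print(osds_up(payload))  # 0

def wait_for_osds(run_osd_stat, want: int, attempts: int = 300) -> bool:
    """Re-query until the target count of up OSDs is reached or attempts
    run out (a sleep between attempts is omitted for brevity)."""
    for _ in range(attempts):
        if osds_up(run_osd_stat()) >= want:
            return True
    return False
```

With `want=8` this mirrors the wait seen in the log: the first poll returns `num_up_osds: 0`, so polling continues until the `osd.all-available-devices` spec has created and started all eight OSDs.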
2026-03-10T12:04:44.097 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:43 vm00 ceph-mon[49203]: Saving service osd.all-available-devices spec with placement *
2026-03-10T12:04:44.097 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:43 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:44.097 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:43 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:44.097 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:43 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:44.097 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:43 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/4071023863' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:04:44.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:43 vm09 ceph-mon[57971]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T12:04:44.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:43 vm09 ceph-mon[57971]: Marking host: vm00 for OSDSpec preview refresh.
2026-03-10T12:04:44.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:43 vm09 ceph-mon[57971]: Marking host: vm09 for OSDSpec preview refresh.
2026-03-10T12:04:44.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:43 vm09 ceph-mon[57971]: Saving service osd.all-available-devices spec with placement *
2026-03-10T12:04:44.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:43 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:44.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:43 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:44.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:43 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:44.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:43 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/4071023863' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:04:44.806 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:04:45.006 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:04:45.189 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:44 vm00 ceph-mon[49203]: Reconfiguring prometheus.vm00 (dependencies changed)...
2026-03-10T12:04:45.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:44 vm00 ceph-mon[49203]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:45.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:44 vm00 ceph-mon[49203]: Reconfiguring daemon prometheus.vm00 on vm00
2026-03-10T12:04:45.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:45.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:45.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T12:04:45.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:45.195 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:44 vm09 ceph-mon[57971]: Reconfiguring prometheus.vm00 (dependencies changed)...
2026-03-10T12:04:45.195 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:44 vm09 ceph-mon[57971]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:45.195 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:44 vm09 ceph-mon[57971]: Reconfiguring daemon prometheus.vm00 on vm00
2026-03-10T12:04:45.195 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:45.195 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:45.195 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm09", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T12:04:45.195 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:45.248 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:04:45.412 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T12:04:46.412 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:04:46.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: Reconfiguring ceph-exporter.vm09 (monmap changed)...
2026-03-10T12:04:46.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: Reconfiguring daemon ceph-exporter.vm09 on vm09
2026-03-10T12:04:46.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: Reconfiguring crash.vm09 (monmap changed)...
2026-03-10T12:04:46.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: Reconfiguring daemon crash.vm09 on vm09
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3097512320' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.xttkce", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T12:04:46.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:46 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:46.588 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: Reconfiguring ceph-exporter.vm09 (monmap changed)...
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: Reconfiguring daemon ceph-exporter.vm09 on vm09
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: Reconfiguring crash.vm09 (monmap changed)...
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm09", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: Reconfiguring daemon crash.vm09 on vm09
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3097512320' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm09.xttkce", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T12:04:46.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:46 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:46.968 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:04:47.138 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T12:04:47.246 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: Reconfiguring mgr.vm09.xttkce (monmap changed)...
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: Reconfiguring daemon mgr.vm09.xttkce on vm09
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: Reconfiguring mon.vm09 (monmap changed)...
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: Reconfiguring daemon mon.vm09 on vm09
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm00.local:9095"}]: dispatch
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T12:04:47.247 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:47 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3659857446' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:04:47.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: Reconfiguring mgr.vm09.xttkce (monmap changed)...
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: Reconfiguring daemon mgr.vm09.xttkce on vm09
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: Reconfiguring mon.vm09 (monmap changed)...
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: Reconfiguring daemon mon.vm09 on vm09
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm00.local:9095"}]: dispatch
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T12:04:47.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:47 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3659857446' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:04:48.139 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:04:48.379 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:04:48.413 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T12:04:48.413 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T12:04:48.413 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T12:04:48.413 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm00.local:9095"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T12:04:48.414 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:48 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm00.local:9093"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm00.local:9095"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T12:04:48.522 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T12:04:48.523 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:48.523 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T12:04:48.523 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:48 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:04:48.671 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:04:48.847 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T12:04:49.410 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:49 vm09 ceph-mon[57971]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:49.411 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:49 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2152226611' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:04:49.428 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:49 vm00 ceph-mon[49203]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:49.428 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:49 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2152226611' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:04:49.848 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:04:50.019 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:04:50.273 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:04:50.399 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/117557855' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "40897eaa-78ac-4741-b44d-ec21972e0d27"}]: dispatch
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "40897eaa-78ac-4741-b44d-ec21972e0d27"}]: dispatch
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "40897eaa-78ac-4741-b44d-ec21972e0d27"}]': finished
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: osdmap e6: 1 total, 0 up, 1 in
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3759012857' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "88bbebec-7cfd-4a7c-9f21-3ccc91880970"}]: dispatch
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3759012857' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "88bbebec-7cfd-4a7c-9f21-3ccc91880970"}]': finished
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: osdmap e7: 2 total, 0 up, 2 in
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/1225742592' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T12:04:50.400 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:50 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1897430580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T12:04:50.428 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1773144289,"num_remapped_pgs":0}
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/117557855' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "40897eaa-78ac-4741-b44d-ec21972e0d27"}]: dispatch
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "40897eaa-78ac-4741-b44d-ec21972e0d27"}]: dispatch
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "40897eaa-78ac-4741-b44d-ec21972e0d27"}]': finished
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: osdmap e6: 1 total, 0 up, 1 in
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3759012857' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "88bbebec-7cfd-4a7c-9f21-3ccc91880970"}]: dispatch
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3759012857' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "88bbebec-7cfd-4a7c-9f21-3ccc91880970"}]': finished
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: osdmap e7: 2 total, 0 up, 2 in
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/1225742592' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T12:04:50.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:50 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/1897430580' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T12:04:51.428 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:04:51.593 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:04:51.620 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:51 vm00 ceph-mon[49203]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:04:51.620 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:51 vm00 ceph-mon[49203]: from='client.?
192.168.123.100:0/3526071119' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:51.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:51 vm09 ceph-mon[57971]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:51.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:51 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3526071119' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:51.839 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:04:52.004 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1773144289,"num_remapped_pgs":0} 2026-03-10T12:04:52.261 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:52 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:04:52.261 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:52 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/496357661' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:52.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:52 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:04:52.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:52 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/496357661' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:53.005 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:04:53.201 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/4022935629' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7b0eaf5c-1add-4a84-baac-a554cd60d945"}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7b0eaf5c-1add-4a84-baac-a554cd60d945"}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7b0eaf5c-1add-4a84-baac-a554cd60d945"}]': finished 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: osdmap e8: 3 total, 0 up, 3 in 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/145617694' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "33aec49c-1711-45a3-b430-9f3ac4f03137"}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "33aec49c-1711-45a3-b430-9f3ac4f03137"}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "33aec49c-1711-45a3-b430-9f3ac4f03137"}]': finished 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: osdmap e9: 4 total, 0 up, 4 in 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:04:53.265 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:53 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/916028453' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:04:53.468 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/4022935629' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7b0eaf5c-1add-4a84-baac-a554cd60d945"}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7b0eaf5c-1add-4a84-baac-a554cd60d945"}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7b0eaf5c-1add-4a84-baac-a554cd60d945"}]': finished 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: osdmap e8: 3 total, 0 up, 3 in 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/145617694' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "33aec49c-1711-45a3-b430-9f3ac4f03137"}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "33aec49c-1711-45a3-b430-9f3ac4f03137"}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "33aec49c-1711-45a3-b430-9f3ac4f03137"}]': finished 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: osdmap e9: 4 total, 0 up, 4 in 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:04:53.501 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:53 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/916028453' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:04:53.640 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773144292,"num_remapped_pgs":0} 2026-03-10T12:04:54.641 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:04:54.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:54 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/2093798626' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:04:54.666 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:54 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2349518847' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:54.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:54 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/2093798626' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:04:54.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:54 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2349518847' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:54.803 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:04:55.103 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:04:55.268 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773144292,"num_remapped_pgs":0} 2026-03-10T12:04:55.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:55 vm00 ceph-mon[49203]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:55.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:55 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3613089463' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:55.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:55 vm09 ceph-mon[57971]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:55.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:55 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/3613089463' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:56.268 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:04:56.395 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:56 vm09 ceph-mon[57971]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:56.497 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:04:56.534 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:56 vm00 ceph-mon[49203]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:56.789 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:04:56.967 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1773144296,"num_remapped_pgs":0} 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/627188608' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "56ec73da-b28d-4ba3-8bdc-9739e6f67a11"}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "56ec73da-b28d-4ba3-8bdc-9739e6f67a11"}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "56ec73da-b28d-4ba3-8bdc-9739e6f67a11"}]': finished 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: osdmap e10: 5 total, 0 up, 5 in 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2252501462' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "45389b3d-e16b-4843-984c-a6a836b4db24"}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/2252501462' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "45389b3d-e16b-4843-984c-a6a836b4db24"}]': finished 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: osdmap e11: 6 total, 0 up, 6 in 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1678902443' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/1075128088' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:04:57.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:57 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1017005233' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:04:57.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/627188608' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "56ec73da-b28d-4ba3-8bdc-9739e6f67a11"}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "56ec73da-b28d-4ba3-8bdc-9739e6f67a11"}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "56ec73da-b28d-4ba3-8bdc-9739e6f67a11"}]': finished 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: osdmap e10: 5 total, 0 up, 5 in 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 
192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2252501462' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "45389b3d-e16b-4843-984c-a6a836b4db24"}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2252501462' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "45389b3d-e16b-4843-984c-a6a836b4db24"}]': finished 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: osdmap e11: 6 total, 0 up, 6 in 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 
ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/1678902443' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/1075128088' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:04:57.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:57 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/1017005233' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T12:04:57.968 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:04:58.141 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:04:58.393 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:04:58.523 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:58 vm00 ceph-mon[49203]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:58.547 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1773144296,"num_remapped_pgs":0} 2026-03-10T12:04:58.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:58 vm09 
ceph-mon[57971]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:04:59.548 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:04:59.568 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:04:59 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3706762305' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:59.573 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:04:59 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3706762305' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:04:59.718 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:04:59.945 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:00.139 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1773144296,"num_remapped_pgs":0} 2026-03-10T12:05:00.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:05:00.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/4187648683' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:00.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='client.? 
192.168.123.109:0/2854527436' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "160d3a70-57e0-495b-b733-f429c14cf529"}]: dispatch
2026-03-10T12:05:00.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "160d3a70-57e0-495b-b733-f429c14cf529"}]: dispatch
2026-03-10T12:05:00.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "160d3a70-57e0-495b-b733-f429c14cf529"}]': finished
2026-03-10T12:05:00.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: osdmap e12: 7 total, 0 up, 7 in
2026-03-10T12:05:00.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:00.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:05:00.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:05:00.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:05:00.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:05:00.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:05:00.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:00 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/4187648683' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/2854527436' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "160d3a70-57e0-495b-b733-f429c14cf529"}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "160d3a70-57e0-495b-b733-f429c14cf529"}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "160d3a70-57e0-495b-b733-f429c14cf529"}]': finished
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: osdmap e12: 7 total, 0 up, 7 in
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:05:00.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:00 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:05:01.140 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2645802473' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7250f127-b26a-4296-9151-d3eea0fab5ef"}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2645802473' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7250f127-b26a-4296-9151-d3eea0fab5ef"}]': finished
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: osdmap e13: 8 total, 0 up, 8 in
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/2067676321' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T12:05:01.302 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:01 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1876043244' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T12:05:01.320 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:05:01.548 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:05:01.702 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0}
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2645802473' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7250f127-b26a-4296-9151-d3eea0fab5ef"}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2645802473' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7250f127-b26a-4296-9151-d3eea0fab5ef"}]': finished
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: osdmap e13: 8 total, 0 up, 8 in
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='client.? 192.168.123.109:0/2067676321' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T12:05:01.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:01 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/1876043244' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T12:05:02.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:02 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1094779360' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:02.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:02 vm00 ceph-mon[49203]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:02.702 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:05:02.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:02 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/1094779360' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:02.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:02 vm09 ceph-mon[57971]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:02.870 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:05:03.104 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:05:03.250 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0}
2026-03-10T12:05:03.603 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:03 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/923312336' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:03.688 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:03 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/923312336' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:04.250 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:05:04.421 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:04 vm09 ceph-mon[57971]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:04.421 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:04 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T12:05:04.421 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:04 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:04.422 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:04 vm09 ceph-mon[57971]: Deploying daemon osd.0 on vm09
2026-03-10T12:05:04.456 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:05:04.486 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:04 vm00 ceph-mon[49203]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:04.486 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:04 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T12:05:04.486 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:04 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:04.486 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:04 vm00 ceph-mon[49203]: Deploying daemon osd.0 on vm09
2026-03-10T12:05:04.754 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:05:04.911 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0}
2026-03-10T12:05:05.524 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:05 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1073307145' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:05.524 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:05 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T12:05:05.524 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:05 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:05.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:05 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/1073307145' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:05.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:05 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T12:05:05.706 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:05 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:05.913 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:05:06.099 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:05:06.371 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:06 vm00 ceph-mon[49203]: Deploying daemon osd.1 on vm00
2026-03-10T12:05:06.371 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:06 vm00 ceph-mon[49203]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:06.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:06 vm09 ceph-mon[57971]: Deploying daemon osd.1 on vm00
2026-03-10T12:05:06.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:06 vm09 ceph-mon[57971]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:06.642 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:05:06.844 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0}
2026-03-10T12:05:07.551 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:07 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:07.551 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:07 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:07.551 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:07 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T12:05:07.551 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:07 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:07.551 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:07 vm09 ceph-mon[57971]: Deploying daemon osd.3 on vm09
2026-03-10T12:05:07.551 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:07 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T12:05:07.551 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:07 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/567163335' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:07.773 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:07 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:07.773 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:07 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:07.773 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:07 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T12:05:07.773 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:07 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:07.773 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:07 vm00 ceph-mon[49203]: Deploying daemon osd.3 on vm09
2026-03-10T12:05:07.773 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:07 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T12:05:07.773 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:07 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/567163335' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:07.845 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:05:08.036 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:05:08.348 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:05:08.508 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0}
2026-03-10T12:05:08.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:08.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:08.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:08.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T12:05:08.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:08.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: Deploying daemon osd.2 on vm00
2026-03-10T12:05:08.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: from='osd.0 [v2:192.168.123.109:6800/3195322373,v1:192.168.123.109:6801/3195322373]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T12:05:08.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T12:05:08.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:08 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3378287536' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:08.954 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:08.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:08.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:08.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T12:05:08.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:08.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: Deploying daemon osd.2 on vm00
2026-03-10T12:05:08.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: from='osd.0 [v2:192.168.123.109:6800/3195322373,v1:192.168.123.109:6801/3195322373]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T12:05:08.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T12:05:08.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:08 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3378287536' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:09.509 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json
2026-03-10T12:05:09.800 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config
2026-03-10T12:05:10.084 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: osdmap e14: 8 total, 0 up, 8 in
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='osd.0 [v2:192.168.123.109:6800/3195322373,v1:192.168.123.109:6801/3195322373]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: Deploying daemon osd.4 on vm09
2026-03-10T12:05:10.085 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:09 vm09 ceph-mon[57971]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T12:05:10.087 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: osdmap e14: 8 total, 0 up, 8 in
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='osd.0 [v2:192.168.123.109:6800/3195322373,v1:192.168.123.109:6801/3195322373]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb'
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: Deploying daemon osd.4 on vm09
2026-03-10T12:05:10.088 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:09 vm00 ceph-mon[49203]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T12:05:10.628 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":15,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0}
2026-03-10T12:05:11.381 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:11.381 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T12:05:11.381 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T12:05:11.381 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: osdmap e15: 8 total, 0 up, 8 in
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/627026067' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='osd.3 [v2:192.168.123.109:6808/3766553571,v1:192.168.123.109:6809/3766553571]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T12:05:11.382 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:11 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: osdmap e15: 8 total, 0 up, 8 in
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/627026067' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='osd.3 [v2:192.168.123.109:6808/3766553571,v1:192.168.123.109:6809/3766553571]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T12:05:11.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:11 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:11.629 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:05:12.025 
INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: purged_snaps scrub starts 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: purged_snaps scrub ok 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: osdmap e16: 8 total, 0 up, 8 in 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:12.190 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='osd.3 [v2:192.168.123.109:6808/3766553571,v1:192.168.123.109:6809/3766553571]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 
192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:12.190 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:12 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: purged_snaps scrub starts 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 
vm09 ceph-mon[57971]: purged_snaps scrub ok 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: osdmap e16: 8 total, 0 up, 8 in 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 
192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='osd.3 [v2:192.168.123.109:6808/3766553571,v1:192.168.123.109:6809/3766553571]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T12:05:12.365 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:12.365 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:12.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:12.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T12:05:12.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:12.366 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:12 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:12.389 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:12.666 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":17,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0} 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: purged_snaps scrub starts 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: purged_snaps scrub ok 
2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: Deploying daemon osd.5 on vm00 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: Deploying daemon osd.6 on vm09 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: osdmap e17: 8 total, 0 up, 8 in 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:13.206 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='osd.0 ' entity='osd.0' 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/3838921881' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:13.206 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:13 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: purged_snaps scrub starts 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: purged_snaps scrub ok 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: Deploying daemon osd.5 on vm00 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: Deploying daemon osd.6 on vm09 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:05:13.439 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: osdmap e17: 8 total, 0 up, 8 in 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:13.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 
12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:13.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='osd.0 ' entity='osd.0' 2026-03-10T12:05:13.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T12:05:13.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3838921881' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:13.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056]' entity='osd.1' 2026-03-10T12:05:13.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:13.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:13.440 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:13 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:13.668 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 
2026-03-10T12:05:13.881 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:14.136 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: purged_snaps scrub starts 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: purged_snaps scrub ok 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: osd.3 [v2:192.168.123.109:6808/3766553571,v1:192.168.123.109:6809/3766553571] boot 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: osd.0 [v2:192.168.123.109:6800/3195322373,v1:192.168.123.109:6801/3195322373] boot 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056] boot 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: osdmap e18: 8 total, 3 up, 8 in 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 
0}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='osd.4 [v2:192.168.123.109:6816/2955964499,v1:192.168.123.109:6817/2955964499]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: 
from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: osdmap e19: 8 total, 3 up, 8 in 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='osd.4 [v2:192.168.123.109:6816/2955964499,v1:192.168.123.109:6817/2955964499]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:14.279 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:14 vm00 ceph-mon[49203]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, 
"args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:14.316 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":19,"num_osds":8,"num_up_osds":3,"osd_up_since":1773144313,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0} 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: purged_snaps scrub starts 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: purged_snaps scrub ok 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: osd.3 [v2:192.168.123.109:6808/3766553571,v1:192.168.123.109:6809/3766553571] boot 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: osd.0 [v2:192.168.123.109:6800/3195322373,v1:192.168.123.109:6801/3195322373] boot 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: osd.1 [v2:192.168.123.100:6802/4271055056,v1:192.168.123.100:6803/4271055056] boot 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: osdmap e18: 8 total, 3 up, 8 in 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 
cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:14.370 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='osd.4 [v2:192.168.123.109:6816/2955964499,v1:192.168.123.109:6817/2955964499]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 
12:05:14 vm09 ceph-mon[57971]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: osdmap e19: 8 total, 3 up, 8 in 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='osd.4 [v2:192.168.123.109:6816/2955964499,v1:192.168.123.109:6817/2955964499]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:14.371 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:14 vm09 ceph-mon[57971]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", 
"id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:15.261 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/424311658' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 
ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: osdmap e20: 8 total, 3 up, 8 in 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: 
dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:15.262 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:15.316 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/424311658' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: 
from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: osdmap e20: 8 total, 3 up, 8 in 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 
192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:15.352 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:15.559 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:15.618 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 sudo[73821]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T12:05:15.618 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 sudo[73821]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T12:05:15.618 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 sudo[73821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T12:05:15.618 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:15 vm00 sudo[73821]: pam_unix(sudo:session): session closed for user root 2026-03-10T12:05:15.706 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 sudo[68307]: ceph : PWD=/ ; USER=root ; 
COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T12:05:15.706 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 sudo[68307]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T12:05:15.706 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 sudo[68307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T12:05:15.706 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:15 vm09 sudo[68307]: pam_unix(sudo:session): session closed for user root 2026-03-10T12:05:16.120 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:16.330 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":20,"num_osds":8,"num_up_osds":3,"osd_up_since":1773144313,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0} 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: purged_snaps scrub starts 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: purged_snaps scrub ok 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: Deploying daemon osd.7 on vm00 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: 
dispatch 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:05:16.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/3854151332' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110] boot 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: osdmap e21: 8 total, 4 up, 8 in 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:16.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:16 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 
7}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: purged_snaps scrub starts 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: purged_snaps scrub ok 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: Deploying daemon osd.7 on vm00 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110]' entity='osd.2' 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm00"}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' 
entity='mgr.vm00.pahkwb' cmd=[{"prefix": "mon metadata", "id": "vm09"}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3854151332' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: osd.2 [v2:192.168.123.100:6810/3192633110,v1:192.168.123.100:6811/3192633110] boot 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: osdmap e21: 8 total, 4 up, 8 in 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' 
entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:16.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:16 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:17.330 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: purged_snaps scrub starts 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: purged_snaps scrub ok 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: mgrmap e18: vm00.pahkwb(active, since 54s), standbys: vm09.xttkce 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.4 ' entity='osd.4' 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.6 [v2:192.168.123.109:6824/3948129958,v1:192.168.123.109:6825/3948129958]' 
entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: osd.4 [v2:192.168.123.109:6816/2955964499,v1:192.168.123.109:6817/2955964499] boot 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: osdmap e22: 8 total, 5 up, 8 in 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: 
dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.6 [v2:192.168.123.109:6824/3948129958,v1:192.168.123.109:6825/3948129958]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:17.350 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:17 vm00 ceph-mon[49203]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: purged_snaps scrub starts 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: purged_snaps scrub ok 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 
12:05:17 vm09 ceph-mon[57971]: mgrmap e18: vm00.pahkwb(active, since 54s), standbys: vm09.xttkce 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.4 ' entity='osd.4' 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.6 [v2:192.168.123.109:6824/3948129958,v1:192.168.123.109:6825/3948129958]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: osd.4 [v2:192.168.123.109:6816/2955964499,v1:192.168.123.109:6817/2955964499] boot 2026-03-10T12:05:17.455 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: osdmap e22: 8 total, 5 up, 8 in 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.6 [v2:192.168.123.109:6824/3948129958,v1:192.168.123.109:6825/3948129958]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:17.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:17 vm09 ceph-mon[57971]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-10T12:05:17.548 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:17.819 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:17.988 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":22,"num_osds":8,"num_up_osds":5,"osd_up_since":1773144317,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0} 2026-03-10T12:05:18.438 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: pgmap v38: 1 pgs: 1 unknown; 0 B data, 905 MiB used, 79 GiB / 80 GiB avail 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1487630621' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: osdmap e23: 8 total, 5 up, 8 in 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd 
metadata", "id": 5}]: dispatch 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:18.439 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:18 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:18.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:18.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: pgmap v38: 1 pgs: 1 unknown; 0 B data, 905 MiB used, 79 GiB / 80 GiB avail 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/1487630621' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: osdmap e23: 8 total, 5 up, 8 in 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:18.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:18 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:18.988 
DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:05:19.219 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:19.558 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:19.765 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":24,"num_osds":8,"num_up_osds":7,"osd_up_since":1773144319,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0} 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: purged_snaps scrub starts 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: purged_snaps scrub ok 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: purged_snaps scrub starts 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: purged_snaps scrub ok 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='osd.6 ' entity='osd.6' 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 
12:05:19 vm00 ceph-mon[49203]: osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016] boot 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: osd.6 [v2:192.168.123.109:6824/3948129958,v1:192.168.123.109:6825/3948129958] boot 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: osdmap e24: 8 total, 7 up, 8 in 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/3157279849' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='osd.7 [v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:19.892 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:19 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: purged_snaps scrub starts 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: purged_snaps scrub ok 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: purged_snaps scrub starts 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: purged_snaps scrub ok 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='osd.6 ' entity='osd.6' 2026-03-10T12:05:19.981 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: osd.5 [v2:192.168.123.100:6818/3438343016,v1:192.168.123.100:6819/3438343016] boot 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: osd.6 [v2:192.168.123.109:6824/3948129958,v1:192.168.123.109:6825/3948129958] boot 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: osdmap e24: 8 total, 7 up, 8 in 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/3157279849' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='osd.7 [v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T12:05:19.981 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:19.982 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:19 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:20.767 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:05:20.996 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:21.278 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 933 MiB used, 99 GiB / 100 GiB avail 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: from='osd.7 [v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: osdmap e25: 8 total, 7 up, 8 in 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: 
from='osd.7 [v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:21.412 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:21 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:05:21.447 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":26,"num_osds":8,"num_up_osds":7,"osd_up_since":1773144319,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0} 2026-03-10T12:05:21.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 933 MiB used, 99 GiB / 100 GiB avail 2026-03-10T12:05:21.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: from='osd.7 [v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176]' entity='osd.7' 
cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T12:05:21.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: osdmap e25: 8 total, 7 up, 8 in 2026-03-10T12:05:21.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: from='osd.7 [v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T12:05:21.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:21.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:21.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:21.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:21.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:21.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:21 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:05:22.448 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 
fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd stat -f json 2026-03-10T12:05:22.454 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: Detected new or changed devices on vm09 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='osd.7 [v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: osdmap e26: 8 total, 7 up, 8 in 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/466963989' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: osd.7 
[v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176] boot 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: osdmap e27: 8 total, 8 up, 8 in 2026-03-10T12:05:22.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:22 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: Detected new or changed devices on vm09 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='osd.7 [v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: osdmap e26: 8 total, 7 up, 8 in 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/466963989' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: osd.7 
[v2:192.168.123.100:6826/3272132176,v1:192.168.123.100:6827/3272132176] boot 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: osdmap e27: 8 total, 8 up, 8 in 2026-03-10T12:05:22.481 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:22 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T12:05:22.628 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:22.854 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:23.022 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":28,"num_osds":8,"num_up_osds":8,"osd_up_since":1773144322,"num_in_osds":8,"osd_in_since":1773144300,"num_remapped_pgs":0} 2026-03-10T12:05:23.023 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd dump --format=json 2026-03-10T12:05:23.240 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:23.264 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:23 vm00 ceph-mon[49203]: purged_snaps scrub starts 2026-03-10T12:05:23.264 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:23 vm00 ceph-mon[49203]: purged_snaps scrub ok 2026-03-10T12:05:23.264 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:23 vm00 ceph-mon[49203]: Detected new or changed devices on vm00 2026-03-10T12:05:23.264 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:23 vm00 ceph-mon[49203]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 986 MiB used, 139 GiB / 140 GiB avail 2026-03-10T12:05:23.264 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:23 vm00 ceph-mon[49203]: osdmap e28: 8 total, 8 up, 8 in 
2026-03-10T12:05:23.264 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:23 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2003491925' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:23.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:23 vm09 ceph-mon[57971]: purged_snaps scrub starts 2026-03-10T12:05:23.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:23 vm09 ceph-mon[57971]: purged_snaps scrub ok 2026-03-10T12:05:23.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:23 vm09 ceph-mon[57971]: Detected new or changed devices on vm00 2026-03-10T12:05:23.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:23 vm09 ceph-mon[57971]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 986 MiB used, 139 GiB / 140 GiB avail 2026-03-10T12:05:23.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:23 vm09 ceph-mon[57971]: osdmap e28: 8 total, 8 up, 8 in 2026-03-10T12:05:23.455 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:23 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/2003491925' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T12:05:23.473 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:23.473 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":29,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","created":"2026-03-10T12:03:15.796811+0000","modified":"2026-03-10T12:05:23.460912+0000","last_up_change":"2026-03-10T12:05:22.175298+0000","last_in_change":"2026-03-10T12:05:00.481345+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":13,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T12:05:13.617104+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_
micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"40897eaa-78ac-4741-b44d-ec21972e0d27","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6801","nonce":3195322373}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6803","nonce":3195322373}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6807","nonce":3195322373}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6805","nonce":3195322373}]},"public_addr":"192.168.123.109:6801/3195322373","cluster_addr":"192.168.123.109:6803/3195322373","heartbeat_back_addr":"192.168.123.109:6807/3195322373","heartbeat_front_addr":"192.168.123.109:6805/3195322373","state":["exists","up"]},{"osd":1,"uuid":"88bbebec-7cfd-4a7c-9f21-3ccc91880970","up":1,"in":1,"weight":1,"primary_affi
nity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6803","nonce":4271055056}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6805","nonce":4271055056}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6809","nonce":4271055056}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6807","nonce":4271055056}]},"public_addr":"192.168.123.100:6803/4271055056","cluster_addr":"192.168.123.100:6805/4271055056","heartbeat_back_addr":"192.168.123.100:6809/4271055056","heartbeat_front_addr":"192.168.123.100:6807/4271055056","state":["exists","up"]},{"osd":2,"uuid":"7b0eaf5c-1add-4a84-baac-a554cd60d945","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6811","nonce":3192633110}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6813","nonce":3192633110}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6817","nonce":3192633110}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6815","nonce":3192633110}]},"public_addr":"192.168.123.100:6811/3192633110","cluster_addr":"192.168.123.100:6813/3192633110","heartbeat_back_addr":"192.168.123.100:6817/3192633110","heartbeat_front_addr":"192.168.123.100:6815/319
2633110","state":["exists","up"]},{"osd":3,"uuid":"33aec49c-1711-45a3-b430-9f3ac4f03137","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6809","nonce":3766553571}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6811","nonce":3766553571}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6815","nonce":3766553571}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6813","nonce":3766553571}]},"public_addr":"192.168.123.109:6809/3766553571","cluster_addr":"192.168.123.109:6811/3766553571","heartbeat_back_addr":"192.168.123.109:6815/3766553571","heartbeat_front_addr":"192.168.123.109:6813/3766553571","state":["exists","up"]},{"osd":4,"uuid":"56ec73da-b28d-4ba3-8bdc-9739e6f67a11","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6817","nonce":2955964499}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6819","nonce":2955964499}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6823","nonce":2955964499}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6821","nonce":2955964499}]},"public_addr":"192.168.123.109:6817/2955964499","cluster_addr":"192.168.123
.109:6819/2955964499","heartbeat_back_addr":"192.168.123.109:6823/2955964499","heartbeat_front_addr":"192.168.123.109:6821/2955964499","state":["exists","up"]},{"osd":5,"uuid":"45389b3d-e16b-4843-984c-a6a836b4db24","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6819","nonce":3438343016}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6821","nonce":3438343016}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6825","nonce":3438343016}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6823","nonce":3438343016}]},"public_addr":"192.168.123.100:6819/3438343016","cluster_addr":"192.168.123.100:6821/3438343016","heartbeat_back_addr":"192.168.123.100:6825/3438343016","heartbeat_front_addr":"192.168.123.100:6823/3438343016","state":["exists","up"]},{"osd":6,"uuid":"160d3a70-57e0-495b-b733-f429c14cf529","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6825","nonce":3948129958}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6827","nonce":3948129958}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6831","nonce":3948129958}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":3948129958},{"type":"v1","
addr":"192.168.123.109:6829","nonce":3948129958}]},"public_addr":"192.168.123.109:6825/3948129958","cluster_addr":"192.168.123.109:6827/3948129958","heartbeat_back_addr":"192.168.123.109:6831/3948129958","heartbeat_front_addr":"192.168.123.109:6829/3948129958","state":["exists","up"]},{"osd":7,"uuid":"7250f127-b26a-4296-9151-d3eea0fab5ef","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":28,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6827","nonce":3272132176}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6829","nonce":3272132176}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6833","nonce":3272132176}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6831","nonce":3272132176}]},"public_addr":"192.168.123.100:6827/3272132176","cluster_addr":"192.168.123.100:6829/3272132176","heartbeat_back_addr":"192.168.123.100:6833/3272132176","heartbeat_front_addr":"192.168.123.100:6831/3272132176","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:09.155238+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:10.450587+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:13.322699+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probabil
ity":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:11.566144+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:14.280835+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:17.523806+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:17.394617+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:20.584419+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/1347968711":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/4196842340":"2026-03-11T12:03:28.133223+0000","192.168.123.100:0/4143634087":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6801/2257139492":"2026-03-11T12:04:21.568389+0000","192.168.123.100:6800/3459973349":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6800/1412054096":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/3697570236":"2026-03-11T12:03:44.003752+0000","192.168.123.100:6801/3459973349":"2026-03-11T12:03:28.133223+0000","192.168.123.100:0/4148821491":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/2342595775":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/3733822469":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6801/1412054096":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/3447713749":"2026-03-11T12:03:44.003752+0000","192.168.123.100:6800/2257139492":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/2019583035":"2026-
03-11T12:04:21.568389+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T12:05:23.642 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T12:05:13.617104+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 
'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T12:05:23.643 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd pool get .mgr pg_num 2026-03-10T12:05:23.814 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:24.046 INFO:teuthology.orchestra.run.vm00.stdout:pg_num: 1 2026-03-10T12:05:24.193 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-10T12:05:24.193 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T12:05:24.356 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:24.624 INFO:teuthology.orchestra.run.vm00.stdout:[client.0] 2026-03-10T12:05:24.624 INFO:teuthology.orchestra.run.vm00.stdout: key = AQAECbBpxxL9JBAAMeJvwBmZkSj3Hil6M08oWA== 2026-03-10T12:05:24.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:24 vm00 ceph-mon[49203]: osdmap e29: 8 total, 8 up, 8 in 2026-03-10T12:05:24.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:24 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2936342549' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:05:24.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:24 vm00 ceph-mon[49203]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:24.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:24 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/1442552648' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T12:05:24.794 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T12:05:24.794 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T12:05:24.794 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T12:05:24.827 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T12:05:24.853 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:24 vm09 ceph-mon[57971]: osdmap e29: 8 total, 8 up, 8 in 2026-03-10T12:05:24.853 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:24 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2936342549' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:05:24.853 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:24 vm09 ceph-mon[57971]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:24.853 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:24 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/1442552648' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T12:05:24.999 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm09/config 2026-03-10T12:05:25.273 INFO:teuthology.orchestra.run.vm09.stdout:[client.1] 2026-03-10T12:05:25.273 INFO:teuthology.orchestra.run.vm09.stdout: key = AQAFCbBpYagcEBAAB3mxJBqX4poCkfB1GCG44Q== 2026-03-10T12:05:25.606 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-10T12:05:25.607 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T12:05:25.607 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T12:05:25.680 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T12:05:25.680 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T12:05:25.680 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mgr dump --format=json 2026-03-10T12:05:25.846 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:25.868 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:25 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3126218958' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:05:25.868 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:25 vm00 ceph-mon[49203]: from='client.? 
192.168.123.100:0/3126218958' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:05:25.868 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:25 vm00 ceph-mon[49203]: from='client.? 192.168.123.109:0/3079601392' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:05:25.868 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:25 vm00 ceph-mon[49203]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:05:25.868 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:25 vm00 ceph-mon[49203]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:05:25.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:25 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3126218958' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:05:25.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:25 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3126218958' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:05:25.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:25 vm09 ceph-mon[57971]: from='client.? 
192.168.123.109:0/3079601392' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:05:25.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:25 vm09 ceph-mon[57971]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T12:05:25.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:25 vm09 ceph-mon[57971]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T12:05:26.132 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:26.284 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":18,"flags":0,"active_gid":14223,"active_name":"vm00.pahkwb","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":4116942521},{"type":"v1","addr":"192.168.123.100:6801","nonce":4116942521}]},"active_addr":"192.168.123.100:6801/4116942521","active_change":"2026-03-10T12:04:21.568698+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14246,"name":"vm09.xttkce","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.100:8443/","prometheus":"http://192.168.123.100:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1561266395}]},{"name":"libceph
sqlite","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":889694678}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":3201736096}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":2370137867}]}]} 2026-03-10T12:05:26.286 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T12:05:26.286 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T12:05:26.286 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd dump --format=json 2026-03-10T12:05:26.453 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:26.693 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:26.693 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":29,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","created":"2026-03-10T12:03:15.796811+0000","modified":"2026-03-10T12:05:23.460912+0000","last_up_change":"2026-03-10T12:05:22.175298+0000","last_in_change":"2026-03-10T12:05:00.481345+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":13,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T12:05:13.617104+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_s
tretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"40897eaa-78ac-4741-b44d-ec21972e0d27","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6801","nonce":3195322373}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6803","nonce":3195322373}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6807","nonce":3195322373}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6805","nonce":3195322373}]},"public_addr":"192.168.123.109:6801/3195322373","cluster_addr":"192.168.123.109:6803/3195322373","heartbeat_back_addr":"192.168.123.109:6807/3195322373","heartbeat_front_addr":"192.168.123.109:6805/3195322373","state":["exists","up"]},{"osd":1,"uuid":"88bbebec-7cfd-4a7c-9f21-3ccc91880970","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6803","nonce":4271055056}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6805","nonce":4271055056}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6809","nonce":4271055056}]},"he
artbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6807","nonce":4271055056}]},"public_addr":"192.168.123.100:6803/4271055056","cluster_addr":"192.168.123.100:6805/4271055056","heartbeat_back_addr":"192.168.123.100:6809/4271055056","heartbeat_front_addr":"192.168.123.100:6807/4271055056","state":["exists","up"]},{"osd":2,"uuid":"7b0eaf5c-1add-4a84-baac-a554cd60d945","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6811","nonce":3192633110}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6813","nonce":3192633110}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6817","nonce":3192633110}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6815","nonce":3192633110}]},"public_addr":"192.168.123.100:6811/3192633110","cluster_addr":"192.168.123.100:6813/3192633110","heartbeat_back_addr":"192.168.123.100:6817/3192633110","heartbeat_front_addr":"192.168.123.100:6815/3192633110","state":["exists","up"]},{"osd":3,"uuid":"33aec49c-1711-45a3-b430-9f3ac4f03137","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6809","nonce":3766553571}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6811","nonce":3766553571}]},"heartbeat_back_addrs":{"addrvec":[{"typ
e":"v2","addr":"192.168.123.109:6814","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6815","nonce":3766553571}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6813","nonce":3766553571}]},"public_addr":"192.168.123.109:6809/3766553571","cluster_addr":"192.168.123.109:6811/3766553571","heartbeat_back_addr":"192.168.123.109:6815/3766553571","heartbeat_front_addr":"192.168.123.109:6813/3766553571","state":["exists","up"]},{"osd":4,"uuid":"56ec73da-b28d-4ba3-8bdc-9739e6f67a11","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6817","nonce":2955964499}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6819","nonce":2955964499}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6823","nonce":2955964499}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6821","nonce":2955964499}]},"public_addr":"192.168.123.109:6817/2955964499","cluster_addr":"192.168.123.109:6819/2955964499","heartbeat_back_addr":"192.168.123.109:6823/2955964499","heartbeat_front_addr":"192.168.123.109:6821/2955964499","state":["exists","up"]},{"osd":5,"uuid":"45389b3d-e16b-4843-984c-a6a836b4db24","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6819","nonce":3438343016}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820"
,"nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6821","nonce":3438343016}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6825","nonce":3438343016}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6823","nonce":3438343016}]},"public_addr":"192.168.123.100:6819/3438343016","cluster_addr":"192.168.123.100:6821/3438343016","heartbeat_back_addr":"192.168.123.100:6825/3438343016","heartbeat_front_addr":"192.168.123.100:6823/3438343016","state":["exists","up"]},{"osd":6,"uuid":"160d3a70-57e0-495b-b733-f429c14cf529","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6825","nonce":3948129958}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6827","nonce":3948129958}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6831","nonce":3948129958}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6829","nonce":3948129958}]},"public_addr":"192.168.123.109:6825/3948129958","cluster_addr":"192.168.123.109:6827/3948129958","heartbeat_back_addr":"192.168.123.109:6831/3948129958","heartbeat_front_addr":"192.168.123.109:6829/3948129958","state":["exists","up"]},{"osd":7,"uuid":"7250f127-b26a-4296-9151-d3eea0fab5ef","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":28,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3272132176},{"type":
"v1","addr":"192.168.123.100:6827","nonce":3272132176}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6829","nonce":3272132176}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6833","nonce":3272132176}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6831","nonce":3272132176}]},"public_addr":"192.168.123.100:6827/3272132176","cluster_addr":"192.168.123.100:6829/3272132176","heartbeat_back_addr":"192.168.123.100:6833/3272132176","heartbeat_front_addr":"192.168.123.100:6831/3272132176","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:09.155238+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:10.450587+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:13.322699+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:11.566144+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:14.280835+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:17.523806+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","l
aggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:17.394617+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:20.584419+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/1347968711":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/4196842340":"2026-03-11T12:03:28.133223+0000","192.168.123.100:0/4143634087":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6801/2257139492":"2026-03-11T12:04:21.568389+0000","192.168.123.100:6800/3459973349":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6800/1412054096":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/3697570236":"2026-03-11T12:03:44.003752+0000","192.168.123.100:6801/3459973349":"2026-03-11T12:03:28.133223+0000","192.168.123.100:0/4148821491":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/2342595775":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/3733822469":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6801/1412054096":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/3447713749":"2026-03-11T12:03:44.003752+0000","192.168.123.100:6800/2257139492":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/2019583035":"2026-03-11T12:04:21.568389+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T12:05:26.837 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:26 vm00 ceph-mon[49203]: pgmap v49: 1 pgs: 1 
active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:26.837 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:26 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1343604373' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T12:05:26.864 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T12:05:26.864 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd dump --format=json 2026-03-10T12:05:26.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:26 vm09 ceph-mon[57971]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:26.921 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:26 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/1343604373' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T12:05:27.031 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:27.262 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:27.262 
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":29,"fsid":"fba12862-1c78-11f1-b92d-892b8c98a56b","created":"2026-03-10T12:03:15.796811+0000","modified":"2026-03-10T12:05:23.460912+0000","last_up_change":"2026-03-10T12:05:22.175298+0000","last_in_change":"2026-03-10T12:05:00.481345+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":13,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T12:05:13.617104+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"40897eaa-78ac-4741-b44d-ec21972e0d27","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":27,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6801","nonce":3195322373}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6803","nonce":3195322373}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6807","nonce":3195322373}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":3195322373},{"type":"v1","addr":"192.168.123.109:6805","nonce":3195322373}]},"public_addr":"192.168.123.109:6801/3195322373","cluster_addr":"192.168.123.109:6803/3195322373","heartbeat_back_addr":"192.168.123.109:6807/3195322373","heartbeat_front_addr":"192.168.123.109:6805/3195322373","state":["exists","up"]},{"osd":1,"uuid":"88bbebec-7cfd-4a7c-9f21-3ccc91880970","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":4271055056},{"type":"v1"
,"addr":"192.168.123.100:6803","nonce":4271055056}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6805","nonce":4271055056}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6809","nonce":4271055056}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":4271055056},{"type":"v1","addr":"192.168.123.100:6807","nonce":4271055056}]},"public_addr":"192.168.123.100:6803/4271055056","cluster_addr":"192.168.123.100:6805/4271055056","heartbeat_back_addr":"192.168.123.100:6809/4271055056","heartbeat_front_addr":"192.168.123.100:6807/4271055056","state":["exists","up"]},{"osd":2,"uuid":"7b0eaf5c-1add-4a84-baac-a554cd60d945","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6811","nonce":3192633110}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6813","nonce":3192633110}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6817","nonce":3192633110}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":3192633110},{"type":"v1","addr":"192.168.123.100:6815","nonce":3192633110}]},"public_addr":"192.168.123.100:6811/3192633110","cluster_addr":"192.168.123.100:6813/3192633110","heartbeat_back_addr":"192.168.123.100:6817/3192633110","heartbeat_front_addr":"192.168.123.100:6815/3192633110","state":["exists","up"]},{"osd":3,"uuid":"33aec49c-1711-45a3-b430-9f3ac4f03137","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,
"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6808","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6809","nonce":3766553571}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6810","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6811","nonce":3766553571}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6814","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6815","nonce":3766553571}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6812","nonce":3766553571},{"type":"v1","addr":"192.168.123.109:6813","nonce":3766553571}]},"public_addr":"192.168.123.109:6809/3766553571","cluster_addr":"192.168.123.109:6811/3766553571","heartbeat_back_addr":"192.168.123.109:6815/3766553571","heartbeat_front_addr":"192.168.123.109:6813/3766553571","state":["exists","up"]},{"osd":4,"uuid":"56ec73da-b28d-4ba3-8bdc-9739e6f67a11","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6816","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6817","nonce":2955964499}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6818","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6819","nonce":2955964499}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6822","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6823","nonce":2955964499}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6820","nonce":2955964499},{"type":"v1","addr":"192.168.123.109:6821","nonce":2955964499}]},"public_addr":"192.168.123.109:6817/2955964499","cluster_addr":"192.168.123.109:6819/2955964499","heartbeat_back_addr":"192.168.123.109:6823/2955964499","heartbeat_front_addr":"192.168.123.109:6821/2955964499","state":["exists","up"]},{"osd":5,"uuid":"45389b3d-e16b-4843-984c
-a6a836b4db24","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6819","nonce":3438343016}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6821","nonce":3438343016}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6825","nonce":3438343016}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":3438343016},{"type":"v1","addr":"192.168.123.100:6823","nonce":3438343016}]},"public_addr":"192.168.123.100:6819/3438343016","cluster_addr":"192.168.123.100:6821/3438343016","heartbeat_back_addr":"192.168.123.100:6825/3438343016","heartbeat_front_addr":"192.168.123.100:6823/3438343016","state":["exists","up"]},{"osd":6,"uuid":"160d3a70-57e0-495b-b733-f429c14cf529","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6824","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6825","nonce":3948129958}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6826","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6827","nonce":3948129958}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6830","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6831","nonce":3948129958}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6828","nonce":3948129958},{"type":"v1","addr":"192.168.123.109:6829","nonce":3948129958}]},"public_addr":"192.168.123.109:6825/3948129958","cluster_addr":"192.168.123.109:6827/3948129958","heartbeat_back_addr":"192.168.123.109:6831/39481299
58","heartbeat_front_addr":"192.168.123.109:6829/3948129958","state":["exists","up"]},{"osd":7,"uuid":"7250f127-b26a-4296-9151-d3eea0fab5ef","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":27,"up_thru":28,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6826","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6827","nonce":3272132176}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6828","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6829","nonce":3272132176}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6832","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6833","nonce":3272132176}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6830","nonce":3272132176},{"type":"v1","addr":"192.168.123.100:6831","nonce":3272132176}]},"public_addr":"192.168.123.100:6827/3272132176","cluster_addr":"192.168.123.100:6829/3272132176","heartbeat_back_addr":"192.168.123.100:6833/3272132176","heartbeat_front_addr":"192.168.123.100:6831/3272132176","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:09.155238+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:10.450587+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:13.322699+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:11.566144+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability"
:0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:14.280835+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:17.523806+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:17.394617+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T12:05:20.584419+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/1347968711":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/4196842340":"2026-03-11T12:03:28.133223+0000","192.168.123.100:0/4143634087":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6801/2257139492":"2026-03-11T12:04:21.568389+0000","192.168.123.100:6800/3459973349":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6800/1412054096":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/3697570236":"2026-03-11T12:03:44.003752+0000","192.168.123.100:6801/3459973349":"2026-03-11T12:03:28.133223+0000","192.168.123.100:0/4148821491":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/2342595775":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/3733822469":"2026-03-11T12:03:28.133223+0000","192.168.123.100:6801/1412054096":"2026-03-11T12:03:44.003752+0000","192.168.123.100:0/3447713749":"2026-03-11T12:03:44.003752+0000","192.168.123.100:6800/2257139492":"2026-03-11T12:04:21.568389+0000","192.168.123.100:0/2019583035":"2026-03-11T12:04:21.568389+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue
":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T12:05:27.408 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph tell osd.0 flush_pg_stats 2026-03-10T12:05:27.408 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph tell osd.1 flush_pg_stats 2026-03-10T12:05:27.408 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph tell osd.2 flush_pg_stats 2026-03-10T12:05:27.408 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph tell osd.3 flush_pg_stats 2026-03-10T12:05:27.408 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph tell osd.4 flush_pg_stats 2026-03-10T12:05:27.409 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph tell osd.5 flush_pg_stats 2026-03-10T12:05:27.409 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 
fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph tell osd.6 flush_pg_stats 2026-03-10T12:05:27.409 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph tell osd.7 flush_pg_stats 2026-03-10T12:05:27.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:27 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2286371508' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:05:27.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:27 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/23025154' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:05:27.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:27 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2286371508' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:05:27.955 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:27 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/23025154' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T12:05:28.093 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:28.094 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:28.130 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:28.163 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:28.170 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:28.207 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:28.208 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:28.315 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:28.703 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:28 vm00 ceph-mon[49203]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:28.779 INFO:teuthology.orchestra.run.vm00.stdout:115964116995 2026-03-10T12:05:28.780 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd last-stat-seq osd.7 2026-03-10T12:05:28.860 INFO:teuthology.orchestra.run.vm00.stdout:77309411332 2026-03-10T12:05:28.860 DEBUG:teuthology.orchestra.run.vm00:> sudo 
/home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd last-stat-seq osd.3 2026-03-10T12:05:28.947 INFO:teuthology.orchestra.run.vm00.stdout:94489280516 2026-03-10T12:05:28.947 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd last-stat-seq osd.4 2026-03-10T12:05:28.954 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:28 vm09 ceph-mon[57971]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 214 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:29.103 INFO:teuthology.orchestra.run.vm00.stdout:77309411332 2026-03-10T12:05:29.104 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd last-stat-seq osd.1 2026-03-10T12:05:29.265 INFO:teuthology.orchestra.run.vm00.stdout:103079215108 2026-03-10T12:05:29.265 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd last-stat-seq osd.5 2026-03-10T12:05:29.278 INFO:teuthology.orchestra.run.vm00.stdout:103079215107 2026-03-10T12:05:29.278 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd last-stat-seq osd.6 2026-03-10T12:05:29.280 INFO:teuthology.orchestra.run.vm00.stdout:77309411333 2026-03-10T12:05:29.280 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 
fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd last-stat-seq osd.0 2026-03-10T12:05:29.282 INFO:teuthology.orchestra.run.vm00.stdout:90194313220 2026-03-10T12:05:29.282 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph osd last-stat-seq osd.2 2026-03-10T12:05:29.393 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:29.634 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:29.784 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:29.808 INFO:teuthology.orchestra.run.vm00.stdout:115964116995 2026-03-10T12:05:30.048 INFO:tasks.cephadm.ceph_manager.ceph:need seq 115964116995 got 115964116995 for osd.7 2026-03-10T12:05:30.048 DEBUG:teuthology.parallel:result is None 2026-03-10T12:05:30.139 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:30.226 INFO:teuthology.orchestra.run.vm00.stdout:77309411333 2026-03-10T12:05:30.316 INFO:teuthology.orchestra.run.vm00.stdout:94489280516 2026-03-10T12:05:30.629 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:30.744 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411332 got 77309411333 for osd.3 2026-03-10T12:05:30.744 DEBUG:teuthology.parallel:result is None 2026-03-10T12:05:30.766 INFO:tasks.cephadm.ceph_manager.ceph:need seq 94489280516 got 94489280516 for osd.4 2026-03-10T12:05:30.766 DEBUG:teuthology.parallel:result is None 2026-03-10T12:05:30.829 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:30.963 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:30 vm00 ceph-mon[49203]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:30.964 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:30 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2356154789' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T12:05:30.964 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:30 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/605003545' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T12:05:30.964 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:30 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3026685362' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T12:05:30.974 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:31.060 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:31.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:30 vm09 ceph-mon[57971]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:31.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:30 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2356154789' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T12:05:31.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:30 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/605003545' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T12:05:31.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:30 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3026685362' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T12:05:31.248 INFO:teuthology.orchestra.run.vm00.stdout:103079215108 2026-03-10T12:05:31.278 INFO:teuthology.orchestra.run.vm00.stdout:77309411333 2026-03-10T12:05:31.326 INFO:teuthology.orchestra.run.vm00.stdout:103079215107 2026-03-10T12:05:31.504 INFO:tasks.cephadm.ceph_manager.ceph:need seq 103079215108 got 103079215108 for osd.5 2026-03-10T12:05:31.504 DEBUG:teuthology.parallel:result is None 2026-03-10T12:05:31.510 INFO:teuthology.orchestra.run.vm00.stdout:90194313220 2026-03-10T12:05:31.523 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411332 got 77309411333 for osd.1 2026-03-10T12:05:31.523 DEBUG:teuthology.parallel:result is None 2026-03-10T12:05:31.533 INFO:tasks.cephadm.ceph_manager.ceph:need seq 103079215107 got 103079215107 for osd.6 2026-03-10T12:05:31.533 DEBUG:teuthology.parallel:result is None 2026-03-10T12:05:31.534 INFO:teuthology.orchestra.run.vm00.stdout:77309411333 2026-03-10T12:05:31.675 INFO:tasks.cephadm.ceph_manager.ceph:need seq 90194313220 got 90194313220 for osd.2 2026-03-10T12:05:31.675 DEBUG:teuthology.parallel:result is None 2026-03-10T12:05:31.716 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411333 got 77309411333 for osd.0 2026-03-10T12:05:31.716 DEBUG:teuthology.parallel:result is None 2026-03-10T12:05:31.716 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T12:05:31.716 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph pg dump --format=json 2026-03-10T12:05:31.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 
10 12:05:31 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2234515574' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T12:05:31.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:31 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3796393080' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T12:05:31.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:31 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2015134137' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T12:05:31.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:31 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/3506477584' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T12:05:31.779 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:31 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/2200285138' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T12:05:31.919 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:32.130 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:32.131 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T12:05:32.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:31 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2234515574' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T12:05:32.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:31 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3796393080' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T12:05:32.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:31 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/2015134137' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T12:05:32.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:31 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/3506477584' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T12:05:32.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:31 vm09 ceph-mon[57971]: from='client.? 192.168.123.100:0/2200285138' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T12:05:32.280 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":52,"stamp":"2026-03-10T12:05:31.585848+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_
pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":218304,"kb_used_data":3564,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167521088,"statfs":{"total":171765137408,"available":171541594112,"internally_reserved":0,"allocated":3649536,"data_stored":2250488,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"9.124421"},"pg_stats":[{"pgid":"1.0","version":"20'32","reported_seq":17,"reported_e
poch":29,"state":"active+clean","last_fresh":"2026-03-10T12:05:23.777115+0000","last_change":"2026-03-10T12:05:23.776036+0000","last_active":"2026-03-10T12:05:23.777115+0000","last_peered":"2026-03-10T12:05:23.777115+0000","last_clean":"2026-03-10T12:05:23.777115+0000","last_became_active":"2026-03-10T12:05:23.468999+0000","last_became_peered":"2026-03-10T12:05:23.468999+0000","last_unstale":"2026-03-10T12:05:23.777115+0000","last_undegraded":"2026-03-10T12:05:23.777115+0000","last_fullsized":"2026-03-10T12:05:23.777115+0000","mapping_epoch":28,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":29,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T12:05:14.122259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T12:05:14.122259+0000","last_clean_scrub_stamp":"2026-03-10T12:05:14.122259+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T23:53:00.601232+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,2],"acting":[7,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"nu
m_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1851392,"data_stored":1837120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":7,"up_from":27,"seq":115964116995,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27564,"kb_used_data":728,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939860,"statfs":{"total":21470642176,"available":21442416640,"internally_reserved":0,"allocated":745472,"data_stored":568361,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":24,"seq":103079215108,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27116,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940308,"statfs":{"total":21470642176,"available":21442875392,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":24,"seq":103079215109,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_
used":27124,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940300,"statfs":{"total":21470642176,"available":21442867200,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":22,"seq":94489280516,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27120,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940304,"statfs":{"total":21470642176,"available":21442871296,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":21,"seq":90194313220,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27572,"kb_used_data":728,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939852,"statfs":{"total":21470642176,"available":21442408448,"internally_reserved":0,"allocated":745472,"data_stored":568361,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_late
ncy_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":18,"seq":77309411333,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27124,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940300,"statfs":{"total":21470642176,"available":21442867200,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":18,"seq":77309411333,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27116,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940308,"statfs":{"total":21470642176,"available":21442875392,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":18,"seq":77309411333,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27568,"kb_used_data":728,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939856,"statfs":{"total":21470642176,"available":21442412544,"internally_reserved":0,"allocated":745472,"data_stored":568361,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":2745799
4},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T12:05:32.280 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph pg dump --format=json 2026-03-10T12:05:32.486 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:32.720 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:32.720 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T12:05:32.863 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:32 vm00 ceph-mon[49203]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:32.863 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:32 vm00 ceph-mon[49203]: from='client.14536 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:32.888 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":52,"stamp":"2026-03-10T12:05:31.585848+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":218304,"kb_used_data":3564,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167521088,"statfs":{"total":171765137408,"available":171541594112,"internally_reserved":0,"allocated":3649536,"data_stored":2250488,"data_compressed":0,"data_compressed_allocated":0,"dat
a_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"9.124421"},"pg_stats":[{"pgid":"1.0","version":"20'32","reported_seq":17,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-10T12:05:23.777115+0000","last_change":"2026-03-10T12:05:23.776036+0000","last_active":"2026-03-10T12:05:23.777115+0000","last_peered":"2026-03-10T12:05:23.777115+0000","last_clean":"2026-03-10T12:05:23.777115+0000","last_became_active":"2026-03-10T12:05:23.468999+0000","last_became_peered":"2026-03-10T
12:05:23.468999+0000","last_unstale":"2026-03-10T12:05:23.777115+0000","last_undegraded":"2026-03-10T12:05:23.777115+0000","last_fullsized":"2026-03-10T12:05:23.777115+0000","mapping_epoch":28,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":29,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T12:05:14.122259+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T12:05:14.122259+0000","last_clean_scrub_stamp":"2026-03-10T12:05:14.122259+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:53:00.601232+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,2],"acting":[7,0,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_prima
ry":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1851392,"data_stored":1837120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":4}],"osd_stats":[{"osd":7,"up_from":27,"seq":115964116995,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27564,"kb_used_data":728,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939860,"statfs":{"total":21470642176,"available":21442416640,"internally_reserved":0,"allocated":745472,"data_stored":568361,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},
"alerts":[]},{"osd":6,"up_from":24,"seq":103079215108,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27116,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940308,"statfs":{"total":21470642176,"available":21442875392,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":24,"seq":103079215109,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27124,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940300,"statfs":{"total":21470642176,"available":21442867200,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":22,"seq":94489280516,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27120,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940304,"statfs":{"total":21470642176,"available":21442871296,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":
0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":21,"seq":90194313220,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27572,"kb_used_data":728,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939852,"statfs":{"total":21470642176,"available":21442408448,"internally_reserved":0,"allocated":745472,"data_stored":568361,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":18,"seq":77309411333,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27124,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940300,"statfs":{"total":21470642176,"available":21442867200,"internally_reserved":0,"allocated":282624,"data_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":18,"seq":77309411333,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27116,"kb_used_data":276,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940308,"statfs":{"total":21470642176,"available":21442875392,"internally_reserved":0,"allocated":282624,"d
ata_stored":109081,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":18,"seq":77309411333,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27568,"kb_used_data":728,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939856,"statfs":{"total":21470642176,"available":21442412544,"internally_reserved":0,"allocated":745472,"data_stored":568361,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_alloca
ted":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T12:05:32.888 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T12:05:32.888 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-10T12:05:32.888 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T12:05:32.888 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph health --format=json 2026-03-10T12:05:33.050 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:33.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:32 vm09 ceph-mon[57971]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:33.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:32 vm09 ceph-mon[57971]: from='client.14536 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:33.294 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:33.294 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T12:05:33.501 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T12:05:33.501 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T12:05:33.501 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T12:05:33.503 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local 2026-03-10T12:05:33.503 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch status' 2026-03-10T12:05:33.679 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:33.921 INFO:teuthology.orchestra.run.vm00.stdout:Backend: cephadm 2026-03-10T12:05:33.921 INFO:teuthology.orchestra.run.vm00.stdout:Available: Yes 2026-03-10T12:05:33.921 INFO:teuthology.orchestra.run.vm00.stdout:Paused: No 2026-03-10T12:05:34.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:33 vm00 ceph-mon[49203]: from='client.14540 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:34.045 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:33 vm00 ceph-mon[49203]: from='client.? 192.168.123.100:0/1325797042' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T12:05:34.071 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch ps' 2026-03-10T12:05:34.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:33 vm09 ceph-mon[57971]: from='client.14540 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:34.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:33 vm09 ceph-mon[57971]: from='client.? 
192.168.123.100:0/1325797042' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T12:05:34.244 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:34.475 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T12:05:34.475 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.vm00 vm00 *:9093,9094 running (52s) 14s ago 94s 24.7M - 0.25.0 c8568f914cd2 6c933b3d7b6f 2026-03-10T12:05:34.475 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter.vm00 vm00 *:9926 running (103s) 14s ago 103s 8619k - 19.2.3-678-ge911bdeb 654f31e6858e 0a22b544da07 2026-03-10T12:05:34.475 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter.vm09 vm09 *:9926 running (67s) 14s ago 67s 6647k - 19.2.3-678-ge911bdeb 654f31e6858e 1c78b25a669d 2026-03-10T12:05:34.475 INFO:teuthology.orchestra.run.vm00.stdout:crash.vm00 vm00 running (102s) 14s ago 101s 7646k - 19.2.3-678-ge911bdeb 654f31e6858e 53ec20e28bb5 2026-03-10T12:05:34.475 INFO:teuthology.orchestra.run.vm00.stdout:crash.vm09 vm09 running (66s) 14s ago 66s 7658k - 19.2.3-678-ge911bdeb 654f31e6858e 94fad2af4135 2026-03-10T12:05:34.475 INFO:teuthology.orchestra.run.vm00.stdout:grafana.vm00 vm00 *:3000 running (51s) 14s ago 84s 75.4M - 10.4.0 c8b91775d855 0ee2b5de7e50 2026-03-10T12:05:34.475 INFO:teuthology.orchestra.run.vm00.stdout:mgr.vm00.pahkwb vm00 *:9283,8765,8443 running (2m) 14s ago 2m 547M - 19.2.3-678-ge911bdeb 654f31e6858e 8c724d477e41 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:mgr.vm09.xttkce vm09 *:8443,9283,8765 running (63s) 14s ago 62s 489M - 19.2.3-678-ge911bdeb 654f31e6858e f3b168f7d183 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:mon.vm00 vm00 running (2m) 14s ago 2m 49.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 50ae03124fd8 2026-03-10T12:05:34.476 
INFO:teuthology.orchestra.run.vm00.stdout:mon.vm09 vm09 running (61s) 14s ago 61s 40.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 12f6138c46ff 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm00 vm00 *:9100 running (98s) 14s ago 98s 10.4M - 1.7.0 72c9c2088986 70dee9bdffbd 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm09 vm09 *:9100 running (63s) 14s ago 63s 9068k - 1.7.0 72c9c2088986 4f4372a946fd 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm09 running (28s) 14s ago 27s 55.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e3e3aca47fe5 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (26s) 14s ago 26s 63.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e467c0366d40 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (23s) 14s ago 23s 56.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e cfab7e02b0b4 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm09 running (25s) 14s ago 25s 29.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e ff3c4031b5c2 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm09 running (22s) 14s ago 22s 54.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 48ca102e9168 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm00 running (20s) 14s ago 19s 29.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e cbcc81e76381 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm09 running (19s) 14s ago 19s 39.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6f740903dffd 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm00 running (17s) 14s ago 17s 15.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d81524966d8a 2026-03-10T12:05:34.476 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.vm00 vm00 *:9095 running (49s) 14s ago 77s 36.2M - 2.51.0 1d3b7f56885b a71086c85881 2026-03-10T12:05:34.645 DEBUG:teuthology.orchestra.run.vm00:> sudo 
/home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch ls' 2026-03-10T12:05:34.822 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:34.874 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:34 vm00 ceph-mon[49203]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:34.874 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:34 vm00 ceph-mon[49203]: from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager ?:9093,9094 1/1 14s ago 116s count:1 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter ?:9926 2/2 15s ago 118s * 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:crash 2/2 15s ago 119s * 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:grafana ?:3000 1/1 14s ago 117s count:1 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:mgr 2/2 15s ago 119s count:2 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:mon 2/2 15s ago 99s vm00:192.168.123.100=vm00;vm09:192.168.123.109=vm09;count:2 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter ?:9100 2/2 15s ago 116s * 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:osd.all-available-devices 8 15s ago 52s * 2026-03-10T12:05:35.054 INFO:teuthology.orchestra.run.vm00.stdout:prometheus ?:9095 1/1 14s ago 118s count:1 2026-03-10T12:05:35.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:34 vm09 ceph-mon[57971]: 
pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:35.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:34 vm09 ceph-mon[57971]: from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:35.228 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch host ls' 2026-03-10T12:05:35.403 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:35.643 INFO:teuthology.orchestra.run.vm00.stdout:HOST ADDR LABELS STATUS 2026-03-10T12:05:35.644 INFO:teuthology.orchestra.run.vm00.stdout:vm00 192.168.123.100 2026-03-10T12:05:35.644 INFO:teuthology.orchestra.run.vm00.stdout:vm09 192.168.123.109 2026-03-10T12:05:35.644 INFO:teuthology.orchestra.run.vm00.stdout:2 hosts in cluster 2026-03-10T12:05:35.737 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:35 vm00 ceph-mon[49203]: from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:35.737 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:35 vm00 ceph-mon[49203]: from='client.24337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:35.810 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch device ls' 2026-03-10T12:05:35.975 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:36.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:35 vm09 ceph-mon[57971]: from='client.14552 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:36.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:35 vm09 ceph-mon[57971]: from='client.24337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 14s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdb hdd DWNBRSTVMM00001 20.0G No 14s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdc hdd DWNBRSTVMM00002 20.0G No 14s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdd hdd DWNBRSTVMM00003 20.0G No 14s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vde hdd DWNBRSTVMM00004 20.0G No 14s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 15s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/vdb hdd DWNBRSTVMM09001 20.0G No 15s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/vdc hdd DWNBRSTVMM09002 
20.0G No 15s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/vdd hdd DWNBRSTVMM09003 20.0G No 15s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:36.220 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/vde hdd DWNBRSTVMM09004 20.0G No 15s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:36.391 INFO:teuthology.run_tasks:Running task vip.exec... 2026-03-10T12:05:36.395 INFO:tasks.vip:Running commands on role host.a host ubuntu@vm00.local 2026-03-10T12:05:36.395 DEBUG:teuthology.orchestra.run.vm00:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'systemctl stop nfs-server' 2026-03-10T12:05:36.421 INFO:teuthology.orchestra.run.vm00.stderr:+ systemctl stop nfs-server 2026-03-10T12:05:36.428 INFO:tasks.vip:Running commands on role host.b host ubuntu@vm09.local 2026-03-10T12:05:36.428 DEBUG:teuthology.orchestra.run.vm09:> sudo TESTDIR=/home/ubuntu/cephtest bash -ex -c 'systemctl stop nfs-server' 2026-03-10T12:05:36.455 INFO:teuthology.orchestra.run.vm09.stderr:+ systemctl stop nfs-server 2026-03-10T12:05:36.462 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T12:05:36.465 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local 2026-03-10T12:05:36.465 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph nfs cluster create foo' 2026-03-10T12:05:36.663 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:36.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:36 vm00 ceph-mon[49203]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:36.938 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:36 vm00 ceph-mon[49203]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:36.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:36 vm00 ceph-mon[49203]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:36.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:36 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:05:37.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:36 vm09 ceph-mon[57971]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:37.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:36 vm09 ceph-mon[57971]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:37.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:36 vm09 ceph-mon[57971]: from='client.14564 -' entity='client.admin' 
cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:37.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:36 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T12:05:38.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:37 vm00 ceph-mon[49203]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "foo", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:38.188 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:37 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:05:38.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:37 vm09 ceph-mon[57971]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "foo", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:38.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:37 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T12:05:38.964 INFO:teuthology.run_tasks:Running task cephadm.wait_for_service... 2026-03-10T12:05:38.966 INFO:tasks.cephadm:Waiting for ceph service nfs.foo to start (timeout 300)... 
2026-03-10T12:05:38.966 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch ls -f json 2026-03-10T12:05:39.104 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:38 vm09 ceph-mon[57971]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:39.104 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished 2026-03-10T12:05:39.104 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:38 vm09 ceph-mon[57971]: osdmap e30: 8 total, 8 up, 8 in 2026-03-10T12:05:39.104 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:38 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch 2026-03-10T12:05:39.165 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:38 vm00 ceph-mon[49203]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:39.165 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished 2026-03-10T12:05:39.165 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:38 vm00 ceph-mon[49203]: osdmap e30: 8 total, 8 up, 8 in 2026-03-10T12:05:39.165 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:38 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", 
"app": "nfs"}]: dispatch 2026-03-10T12:05:39.210 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:39.507 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:39.507 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T12:03:38.731696Z", "last_refresh": "2026-03-10T12:05:20.328816Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:26.931751Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T12:03:36.336075Z", "last_refresh": "2026-03-10T12:05:19.665768Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:27.846605Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T12:03:35.899202Z", "last_refresh": "2026-03-10T12:05:19.665839Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T12:03:37.273559Z", "last_refresh": "2026-03-10T12:05:20.328861Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:31.511059Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T12:03:35.438683Z", "last_refresh": "2026-03-10T12:05:19.665908Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:33.049009Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm09:192.168.123.109=vm09"]}, 
"service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T12:03:55.362045Z", "last_refresh": "2026-03-10T12:05:19.665940Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T12:05:38.799345Z service:nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049}, "status": {"created": "2026-03-10T12:05:38.795818Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T12:04:30.630564Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T12:03:38.181454Z", "last_refresh": "2026-03-10T12:05:19.665876Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:42.902195Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T12:04:42.897372Z", "last_refresh": "2026-03-10T12:05:19.665972Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T12:04:33.051829Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T12:03:36.743830Z", "last_refresh": "2026-03-10T12:05:20.328895Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T12:05:39.675 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T12:05:40.104 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished 2026-03-10T12:05:40.104 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:39 vm00 ceph-mon[49203]: osdmap e31: 8 total, 8 up, 8 in 2026-03-10T12:05:40.104 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:39 vm00 ceph-mon[49203]: Saving service nfs.foo spec with placement count:1 2026-03-10T12:05:40.104 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.104 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:05:40.104 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.104 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:39 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.104 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:39 vm00 ceph-mon[49203]: osdmap e32: 8 total, 8 up, 8 in 2026-03-10T12:05:40.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished 2026-03-10T12:05:40.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:39 vm09 ceph-mon[57971]: osdmap e31: 8 total, 8 up, 8 in 2026-03-10T12:05:40.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:39 vm09 ceph-mon[57971]: Saving service nfs.foo spec with placement count:1 2026-03-10T12:05:40.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:39 vm09 ceph-mon[57971]: 
from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:05:40.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:39 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:39 vm09 ceph-mon[57971]: osdmap e32: 8 total, 8 up, 8 in 2026-03-10T12:05:40.676 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch ls -f json 2026-03-10T12:05:40.932 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: pgmap v58: 33 pgs: 4 creating+peering, 21 unknown, 8 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: mgrmap e19: vm00.pahkwb(active, since 78s), standbys: vm09.xttkce 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 
192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.jzguoo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.jzguoo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": 
"auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.jzguoo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.jzguoo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T12:05:40.956 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:40 vm00 ceph-mon[49203]: from='mgr.14223 
192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='client.14576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: pgmap v58: 33 pgs: 4 creating+peering, 21 unknown, 8 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: mgrmap e19: vm00.pahkwb(active, since 78s), standbys: vm09.xttkce 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.jzguoo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]: dispatch 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.jzguoo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=foo"]}]': finished 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.foo", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]: dispatch 
2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.foo"}]': finished 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.jzguoo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.foo.0.0.vm00.jzguoo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-10T12:05:41.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:40 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:41.215 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:41.215 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T12:03:38.731696Z", "last_refresh": "2026-03-10T12:05:40.283029Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:26.931751Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T12:03:36.336075Z", "last_refresh": "2026-03-10T12:05:39.566997Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:27.846605Z service:crash [INFO] \"service was 
created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T12:03:35.899202Z", "last_refresh": "2026-03-10T12:05:39.567052Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T12:03:37.273559Z", "last_refresh": "2026-03-10T12:05:40.283056Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:31.511059Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T12:03:35.438683Z", "last_refresh": "2026-03-10T12:05:39.567122Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:33.049009Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T12:03:55.362045Z", "last_refresh": "2026-03-10T12:05:39.567153Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T12:05:40.511540Z service:nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049}, "status": {"created": "2026-03-10T12:05:38.795818Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T12:04:30.630564Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T12:03:38.181454Z", "last_refresh": "2026-03-10T12:05:39.567089Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:42.902195Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", 
"service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T12:04:42.897372Z", "last_refresh": "2026-03-10T12:05:39.567184Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T12:04:33.051829Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T12:03:36.743830Z", "last_refresh": "2026-03-10T12:05:40.283083Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T12:05:41.446 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T12:05:41.811 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: Creating key for client.nfs.foo.0.0.vm00.jzguoo 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: Ensuring nfs.foo.0 is in the ganesha grace table 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: Rados config object exists: conf-nfs.foo 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: Creating key for client.nfs.foo.0.0.vm00.jzguoo-rgw 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: Bind address in nfs.foo.0.0.vm00.jzguoo's ganesha conf is defaulting to empty 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: Deploying daemon nfs.foo.0.0.vm00.jzguoo on vm00 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: from='client.14590 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' 
entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.064 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:41 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: Creating key for client.nfs.foo.0.0.vm00.jzguoo 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: Ensuring nfs.foo.0 is in the ganesha grace table 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: Rados config object exists: conf-nfs.foo 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: Creating key for client.nfs.foo.0.0.vm00.jzguoo-rgw 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: Bind address in nfs.foo.0.0.vm00.jzguoo's ganesha conf is defaulting to empty 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: Deploying daemon nfs.foo.0.0.vm00.jzguoo on vm00 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: from='client.14590 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:42.203 
INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.203 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:41 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:05:42.447 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch ls -f json 2026-03-10T12:05:42.698 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:42.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:42 vm00 ceph-mon[49203]: pgmap v60: 33 pgs: 4 creating+peering, 10 unknown, 19 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T12:05:42.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:42 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.939 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:42 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:42.969 
INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:42.969 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T12:03:38.731696Z", "last_refresh": "2026-03-10T12:05:40.283029Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:26.931751Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T12:03:36.336075Z", "last_refresh": "2026-03-10T12:05:40.282943Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:27.846605Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T12:03:35.899202Z", "last_refresh": "2026-03-10T12:05:40.282972Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T12:03:37.273559Z", "last_refresh": "2026-03-10T12:05:40.283056Z", "ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:31.511059Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T12:03:35.438683Z", "last_refresh": "2026-03-10T12:05:40.282910Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:33.049009Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T12:03:55.362045Z", "last_refresh": "2026-03-10T12:05:40.282862Z", "running": 2, "size": 2}}, {"events": 
["2026-03-10T12:05:41.660617Z service:nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049}, "status": {"created": "2026-03-10T12:05:38.795818Z", "ports": [2049], "running": 0, "size": 1}}, {"events": ["2026-03-10T12:04:30.630564Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T12:03:38.181454Z", "last_refresh": "2026-03-10T12:05:40.283001Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:42.902195Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T12:04:42.897372Z", "last_refresh": "2026-03-10T12:05:40.283110Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T12:04:33.051829Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": "prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T12:03:36.743830Z", "last_refresh": "2026-03-10T12:05:40.283083Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T12:05:43.125 INFO:tasks.cephadm:nfs.foo has 0/1 2026-03-10T12:05:43.204 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:42 vm09 ceph-mon[57971]: pgmap v60: 33 pgs: 4 creating+peering, 10 unknown, 19 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 0 B/s wr, 0 op/s 2026-03-10T12:05:43.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:42 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:43.205 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:42 vm09 
ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.126 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph orch ls -f json 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='client.14602 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", 
"states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.139 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:44 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:05:44.380 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:44.409 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='client.14602 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' 
entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:44.410 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:44 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T12:05:44.668 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T12:05:44.668 INFO:teuthology.orchestra.run.vm00.stdout:[{"placement": {"count": 1}, "service_name": "alertmanager", "service_type": "alertmanager", "status": {"created": "2026-03-10T12:03:38.731696Z", "last_refresh": "2026-03-10T12:05:43.376214Z", "ports": [9093, 9094], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:26.931751Z service:ceph-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "ceph-exporter", "service_type": "ceph-exporter", "spec": {"prio_limit": 5, "stats_period": 5}, "status": {"created": "2026-03-10T12:03:36.336075Z", "last_refresh": "2026-03-10T12:05:43.376117Z", "ports": [9926], "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:27.846605Z service:crash [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash", "status": {"created": "2026-03-10T12:03:35.899202Z", "last_refresh": "2026-03-10T12:05:43.376147Z", "running": 2, "size": 2}}, {"placement": {"count": 1}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}, "status": {"created": "2026-03-10T12:03:37.273559Z", "last_refresh": "2026-03-10T12:05:43.376241Z", 
"ports": [3000], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:31.511059Z service:mgr [INFO] \"service was created\""], "placement": {"count": 2}, "service_name": "mgr", "service_type": "mgr", "status": {"created": "2026-03-10T12:03:35.438683Z", "last_refresh": "2026-03-10T12:05:43.376083Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:33.049009Z service:mon [INFO] \"service was created\""], "placement": {"count": 2, "hosts": ["vm00:192.168.123.100=vm00", "vm09:192.168.123.109=vm09"]}, "service_name": "mon", "service_type": "mon", "status": {"created": "2026-03-10T12:03:55.362045Z", "last_refresh": "2026-03-10T12:05:43.376032Z", "running": 2, "size": 2}}, {"events": ["2026-03-10T12:05:43.405071Z service:nfs.foo [INFO] \"service was created\""], "placement": {"count": 1}, "service_id": "foo", "service_name": "nfs.foo", "service_type": "nfs", "spec": {"port": 2049}, "status": {"created": "2026-03-10T12:05:38.795818Z", "last_refresh": "2026-03-10T12:05:43.376409Z", "ports": [2049], "running": 1, "size": 1}}, {"events": ["2026-03-10T12:04:30.630564Z service:node-exporter [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter", "status": {"created": "2026-03-10T12:03:38.181454Z", "last_refresh": "2026-03-10T12:05:43.376186Z", "ports": [9100], "running": 2, "size": 2}}, {"events": ["2026-03-10T12:04:42.902195Z service:osd.all-available-devices [INFO] \"service was created\""], "placement": {"host_pattern": "*"}, "service_id": "all-available-devices", "service_name": "osd.all-available-devices", "service_type": "osd", "spec": {"data_devices": {"all": true}, "filter_logic": "AND", "objectstore": "bluestore"}, "status": {"created": "2026-03-10T12:04:42.897372Z", "last_refresh": "2026-03-10T12:05:43.376301Z", "running": 8, "size": 8}}, {"events": ["2026-03-10T12:04:33.051829Z service:prometheus [INFO] \"service was created\""], "placement": {"count": 1}, "service_name": 
"prometheus", "service_type": "prometheus", "status": {"created": "2026-03-10T12:03:36.743830Z", "last_refresh": "2026-03-10T12:05:43.376271Z", "ports": [9095], "running": 1, "size": 1}}] 2026-03-10T12:05:44.835 INFO:tasks.cephadm:nfs.foo has 1/1 2026-03-10T12:05:44.835 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-10T12:05:44.837 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local 2026-03-10T12:05:44.837 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'stat -c '"'"'%u %g'"'"' /var/log/ceph | grep '"'"'167 167'"'"'' 2026-03-10T12:05:45.087 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:45.188 INFO:teuthology.orchestra.run.vm00.stdout:167 167 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: pgmap v61: 33 pgs: 33 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.336 
INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:05:45.336 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:45 vm00 ceph-mon[49203]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.336 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch status' 2026-03-10T12:05:45.520 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: pgmap v61: 33 pgs: 33 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 1.3 KiB/s wr, 3 op/s 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 
vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T12:05:45.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:45 vm09 ceph-mon[57971]: from='mgr.14223 192.168.123.100:0/367366487' entity='mgr.vm00.pahkwb' 2026-03-10T12:05:45.861 INFO:teuthology.orchestra.run.vm00.stdout:Backend: cephadm 2026-03-10T12:05:45.861 INFO:teuthology.orchestra.run.vm00.stdout:Available: Yes 2026-03-10T12:05:45.861 INFO:teuthology.orchestra.run.vm00.stdout:Paused: No 2026-03-10T12:05:46.014 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 
--fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch ps' 2026-03-10T12:05:46.185 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:46.429 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.vm00 vm00 *:9093,9094 running (64s) 1s ago 106s 24.7M - 0.25.0 c8568f914cd2 6c933b3d7b6f 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter.vm00 vm00 *:9926 running (115s) 1s ago 115s 9072k - 19.2.3-678-ge911bdeb 654f31e6858e 0a22b544da07 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter.vm09 vm09 *:9926 running (79s) 2s ago 79s 6647k - 19.2.3-678-ge911bdeb 654f31e6858e 1c78b25a669d 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:crash.vm00 vm00 running (113s) 1s ago 113s 7646k - 19.2.3-678-ge911bdeb 654f31e6858e 53ec20e28bb5 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:crash.vm09 vm09 running (78s) 2s ago 78s 7658k - 19.2.3-678-ge911bdeb 654f31e6858e 94fad2af4135 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:grafana.vm00 vm00 *:3000 running (63s) 1s ago 96s 75.4M - 10.4.0 c8b91775d855 0ee2b5de7e50 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:mgr.vm00.pahkwb vm00 *:9283,8765,8443 running (2m) 1s ago 2m 553M - 19.2.3-678-ge911bdeb 654f31e6858e 8c724d477e41 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:mgr.vm09.xttkce vm09 *:8443,9283,8765 running (75s) 2s ago 74s 490M - 19.2.3-678-ge911bdeb 654f31e6858e f3b168f7d183 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:mon.vm00 vm00 running (2m) 1s ago 2m 52.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 50ae03124fd8 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:mon.vm09 vm09 running (73s) 2s 
ago 73s 44.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 12f6138c46ff 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:nfs.foo.0.0.vm00.jzguoo vm00 *:2049 running (4s) 1s ago 4s 13.5M - 5.9 654f31e6858e 8b397888e7ee 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm00 vm00 *:9100 running (110s) 1s ago 110s 10.5M - 1.7.0 72c9c2088986 70dee9bdffbd 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm09 vm09 *:9100 running (75s) 2s ago 75s 9294k - 1.7.0 72c9c2088986 4f4372a946fd 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm09 running (39s) 2s ago 39s 63.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e3e3aca47fe5 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm00 running (38s) 1s ago 38s 66.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e467c0366d40 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm00 running (35s) 1s ago 35s 65.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e cfab7e02b0b4 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:osd.3 vm09 running (37s) 2s ago 37s 40.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e ff3c4031b5c2 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:osd.4 vm09 running (34s) 2s ago 34s 63.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 48ca102e9168 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:osd.5 vm00 running (31s) 1s ago 31s 40.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e cbcc81e76381 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:osd.6 vm09 running (31s) 2s ago 31s 63.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6f740903dffd 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:osd.7 vm00 running (29s) 1s ago 29s 41.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d81524966d8a 2026-03-10T12:05:46.430 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.vm00 vm00 *:9095 running (61s) 1s ago 89s 36.3M - 2.51.0 1d3b7f56885b 
a71086c85881 2026-03-10T12:05:46.557 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:46 vm00 ceph-mon[49203]: from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:46.581 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch ls' 2026-03-10T12:05:46.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:46 vm09 ceph-mon[57971]: from='client.24367 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T12:05:46.784 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager ?:9093,9094 1/1 1s ago 2m count:1 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:ceph-exporter ?:9926 2/2 2s ago 2m * 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:crash 2/2 2s ago 2m * 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:grafana ?:3000 1/1 1s ago 2m count:1 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:mgr 2/2 2s ago 2m count:2 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:mon 2/2 2s ago 111s vm00:192.168.123.100=vm00;vm09:192.168.123.109=vm09;count:2 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:nfs.foo ?:2049 1/1 1s ago 8s count:1 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter ?:9100 2/2 2s ago 2m * 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:osd.all-available-devices 8 2s ago 64s 
* 2026-03-10T12:05:47.041 INFO:teuthology.orchestra.run.vm00.stdout:prometheus ?:9095 1/1 1s ago 2m count:1 2026-03-10T12:05:47.192 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch host ls' 2026-03-10T12:05:47.360 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:47.422 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:47 vm00 ceph-mon[49203]: pgmap v62: 33 pgs: 33 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.0 KiB/s wr, 2 op/s 2026-03-10T12:05:47.423 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:47 vm00 ceph-mon[49203]: from='client.14610 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:47.579 INFO:teuthology.orchestra.run.vm00.stdout:HOST ADDR LABELS STATUS 2026-03-10T12:05:47.579 INFO:teuthology.orchestra.run.vm00.stdout:vm00 192.168.123.100 2026-03-10T12:05:47.579 INFO:teuthology.orchestra.run.vm00.stdout:vm09 192.168.123.109 2026-03-10T12:05:47.579 INFO:teuthology.orchestra.run.vm00.stdout:2 hosts in cluster 2026-03-10T12:05:47.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:47 vm09 ceph-mon[57971]: pgmap v62: 33 pgs: 33 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1.0 KiB/s wr, 2 op/s 2026-03-10T12:05:47.704 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:47 vm09 ceph-mon[57971]: from='client.14610 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:47.727 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c 
/etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch device ls' 2026-03-10T12:05:47.894 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 26s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdb hdd DWNBRSTVMM00001 20.0G No 26s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdc hdd DWNBRSTVMM00002 20.0G No 26s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vdd hdd DWNBRSTVMM00003 20.0G No 26s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm00 /dev/vde hdd DWNBRSTVMM00004 20.0G No 26s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 27s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/vdb hdd DWNBRSTVMM09001 20.0G No 27s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/vdc hdd DWNBRSTVMM09002 20.0G No 27s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/vdd hdd DWNBRSTVMM09003 20.0G No 27s ago 
Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:48.123 INFO:teuthology.orchestra.run.vm00.stdout:vm09 /dev/vde hdd DWNBRSTVMM09004 20.0G No 27s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T12:05:48.263 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:48 vm00 ceph-mon[49203]: from='client.14614 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:48.263 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:48 vm00 ceph-mon[49203]: from='client.24373 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:48.277 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- bash -c 'ceph orch ls | grep '"'"'^osd.all-available-devices '"'"'' 2026-03-10T12:05:48.442 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:48.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:48 vm09 ceph-mon[57971]: from='client.14614 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:48.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:48 vm09 ceph-mon[57971]: from='client.24373 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:48.706 INFO:teuthology.orchestra.run.vm00.stdout:osd.all-available-devices 8 4s ago 65s * 2026-03-10T12:05:48.871 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T12:05:48.873 INFO:tasks.cephadm:Teardown begin 2026-03-10T12:05:48.873 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 
2026-03-10T12:05:48.900 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T12:05:48.925 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T12:05:48.925 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid fba12862-1c78-11f1-b92d-892b8c98a56b -- ceph mgr module disable cephadm 2026-03-10T12:05:49.107 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/mon.vm00/config 2026-03-10T12:05:49.126 INFO:teuthology.orchestra.run.vm00.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-10T12:05:49.145 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T12:05:49.145 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T12:05:49.145 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T12:05:49.160 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T12:05:49.175 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T12:05:49.175 INFO:tasks.cephadm.mon.vm00:Stopping mon.vm00... 2026-03-10T12:05:49.175 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm00 2026-03-10T12:05:49.390 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:49 vm00 systemd[1]: Stopping Ceph mon.vm00 for fba12862-1c78-11f1-b92d-892b8c98a56b... 
2026-03-10T12:05:49.391 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:49 vm00 ceph-mon[49203]: from='client.14622 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:49.391 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:49 vm00 ceph-mon[49203]: pgmap v63: 33 pgs: 33 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 927 B/s wr, 2 op/s 2026-03-10T12:05:49.391 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:49 vm00 ceph-mon[49203]: from='client.14626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:49.391 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:49 vm00 ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm00[49181]: 2026-03-10T12:05:49.311+0000 7fec5dd41640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm00 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T12:05:49.391 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:49 vm00 ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm00[49181]: 2026-03-10T12:05:49.311+0000 7fec5dd41640 -1 mon.vm00@0(leader) e2 *** Got Signal Terminated *** 2026-03-10T12:05:49.689 INFO:journalctl@ceph.mon.vm00.vm00.stdout:Mar 10 12:05:49 vm00 podman[86018]: 2026-03-10 12:05:49.391957936 +0000 UTC m=+0.094763931 container died 50ae03124fd82e3054d5dcb50874afd67201990ad6cbfe4cab03de2481766055 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm00, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, 
GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True) 2026-03-10T12:05:49.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:49 vm09 ceph-mon[57971]: from='client.14622 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:49.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:49 vm09 ceph-mon[57971]: pgmap v63: 33 pgs: 33 active+clean; 449 KiB data, 213 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 927 B/s wr, 2 op/s 2026-03-10T12:05:49.705 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:49 vm09 ceph-mon[57971]: from='client.14626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T12:05:50.015 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm00.service' 2026-03-10T12:05:50.071 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T12:05:50.071 INFO:tasks.cephadm.mon.vm00:Stopped mon.vm00 2026-03-10T12:05:50.071 INFO:tasks.cephadm.mon.vm09:Stopping mon.vm09... 2026-03-10T12:05:50.071 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm09 2026-03-10T12:05:50.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:50 vm09 systemd[1]: Stopping Ceph mon.vm09 for fba12862-1c78-11f1-b92d-892b8c98a56b... 
2026-03-10T12:05:50.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:50 vm09 ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm09[57947]: 2026-03-10T12:05:50.197+0000 7f5466d63640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm09 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T12:05:50.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:50 vm09 ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm09[57947]: 2026-03-10T12:05:50.197+0000 7f5466d63640 -1 mon.vm09@1(peon) e2 *** Got Signal Terminated *** 2026-03-10T12:05:50.273 INFO:journalctl@ceph.mon.vm09.vm09.stdout:Mar 10 12:05:50 vm09 podman[73007]: 2026-03-10 12:05:50.224464465 +0000 UTC m=+0.050080008 container died 12f6138c46fff7a30b02e4339a39b083a0ae1bd49b4108f254e94fe8959c8a25 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-fba12862-1c78-11f1-b92d-892b8c98a56b-mon-vm09, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team , FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223) 2026-03-10T12:05:50.430 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-fba12862-1c78-11f1-b92d-892b8c98a56b@mon.vm09.service' 2026-03-10T12:05:50.475 
DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T12:05:50.475 INFO:tasks.cephadm.mon.vm09:Stopped mon.vm09 2026-03-10T12:05:50.475 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid fba12862-1c78-11f1-b92d-892b8c98a56b --force --keep-logs 2026-03-10T12:05:50.643 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:06:29.249 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid fba12862-1c78-11f1-b92d-892b8c98a56b --force --keep-logs 2026-03-10T12:06:29.368 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:06:55.421 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T12:06:55.448 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T12:06:55.483 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T12:06:55.483 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020/remote/vm00/crash 2026-03-10T12:06:55.483 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/crash -- . 
2026-03-10T12:06:55.514 INFO:teuthology.orchestra.run.vm00.stderr:tar: /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/crash: Cannot open: No such file or directory 2026-03-10T12:06:55.515 INFO:teuthology.orchestra.run.vm00.stderr:tar: Error is not recoverable: exiting now 2026-03-10T12:06:55.515 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020/remote/vm09/crash 2026-03-10T12:06:55.515 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/crash -- . 2026-03-10T12:06:55.556 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/crash: Cannot open: No such file or directory 2026-03-10T12:06:55.556 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now 2026-03-10T12:06:55.557 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T12:06:55.557 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_DAEMON_PLACE_FAIL | egrep -v CEPHADM_FAILED_DAEMON | head -n 1 2026-03-10T12:06:55.586 INFO:tasks.cephadm:Compressing logs... 
2026-03-10T12:06:55.586 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T12:06:55.627 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T12:06:55.648 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T12:06:55.648 INFO:teuthology.orchestra.run.vm00.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T12:06:55.649 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mon.vm00.log 2026-03-10T12:06:55.650 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.log 2026-03-10T12:06:55.652 INFO:teuthology.orchestra.run.vm09.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T12:06:55.652 INFO:teuthology.orchestra.run.vm09.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T12:06:55.653 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-volume.log 2026-03-10T12:06:55.653 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-client.ceph-exporter.vm09.log 2026-03-10T12:06:55.654 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-volume.log: 92.1% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T12:06:55.654 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mgr.vm09.xttkce.log 2026-03-10T12:06:55.655 
INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mon.vm00.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.audit.log 2026-03-10T12:06:55.655 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-client.ceph-exporter.vm09.log: 28.6% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-client.ceph-exporter.vm09.log.gz 2026-03-10T12:06:55.655 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.log: 84.3% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.log.gz 2026-03-10T12:06:55.658 INFO:teuthology.orchestra.run.vm09.stderr: 95.7% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-volume.log.gz 2026-03-10T12:06:55.658 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mon.vm09.log 2026-03-10T12:06:55.659 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mgr.vm09.xttkce.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.audit.log 2026-03-10T12:06:55.660 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mgr.vm00.pahkwb.log 2026-03-10T12:06:55.661 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mon.vm09.log: 91.1% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mgr.vm09.xttkce.log.gz 2026-03-10T12:06:55.661 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.log 2026-03-10T12:06:55.662 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.audit.log: 90.8% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.audit.log.gz 
2026-03-10T12:06:55.662 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.cephadm.log 2026-03-10T12:06:55.663 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.log: 82.7% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.log.gz 2026-03-10T12:06:55.663 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.0.log 2026-03-10T12:06:55.663 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.audit.log: 90.7% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.audit.log.gz 2026-03-10T12:06:55.664 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.cephadm.log: 80.9% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.cephadm.log.gz 2026-03-10T12:06:55.664 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.3.log 2026-03-10T12:06:55.664 INFO:teuthology.orchestra.run.vm00.stderr: 92.0% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T12:06:55.664 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.cephadm.log 2026-03-10T12:06:55.669 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mgr.vm00.pahkwb.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-volume.log 2026-03-10T12:06:55.670 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.cephadm.log: 82.5% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph.cephadm.log.gz 2026-03-10T12:06:55.674 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- 
/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-client.ceph-exporter.vm00.log 2026-03-10T12:06:55.675 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.4.log 2026-03-10T12:06:55.680 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.6.log 2026-03-10T12:06:55.683 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.1.log 2026-03-10T12:06:55.683 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.4.log: 92.0% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mon.vm09.log.gz 2026-03-10T12:06:55.683 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-client.ceph-exporter.vm00.log: 90.6% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-client.ceph-exporter.vm00.log.gz 2026-03-10T12:06:55.685 INFO:teuthology.orchestra.run.vm00.stderr: 95.7% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-volume.log.gz 2026-03-10T12:06:55.693 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.2.log 2026-03-10T12:06:55.704 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.5.log 2026-03-10T12:06:55.707 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.6.log: 93.2% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.6.log.gz 
2026-03-10T12:06:55.711 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.7.log 2026-03-10T12:06:55.714 INFO:teuthology.orchestra.run.vm09.stderr: 93.2% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.3.log.gz 2026-03-10T12:06:55.720 INFO:teuthology.orchestra.run.vm09.stderr: 93.0% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.0.log.gz 2026-03-10T12:06:55.722 INFO:teuthology.orchestra.run.vm09.stderr: 93.1% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.4.log.gz 2026-03-10T12:06:55.723 INFO:teuthology.orchestra.run.vm09.stderr: 2026-03-10T12:06:55.723 INFO:teuthology.orchestra.run.vm09.stderr:real 0m0.081s 2026-03-10T12:06:55.724 INFO:teuthology.orchestra.run.vm09.stderr:user 0m0.129s 2026-03-10T12:06:55.724 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m0.019s 2026-03-10T12:06:55.762 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.5.log: /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.7.log: 93.3% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.1.log.gz 2026-03-10T12:06:55.763 INFO:teuthology.orchestra.run.vm00.stderr: 93.0% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.2.log.gz 2026-03-10T12:06:55.769 INFO:teuthology.orchestra.run.vm00.stderr: 89.9% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mgr.vm00.pahkwb.log.gz 2026-03-10T12:06:55.770 INFO:teuthology.orchestra.run.vm00.stderr: 93.3% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.5.log.gz 2026-03-10T12:06:55.776 INFO:teuthology.orchestra.run.vm00.stderr: 93.6% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-osd.7.log.gz 2026-03-10T12:06:55.783 
INFO:teuthology.orchestra.run.vm00.stderr: 91.5% -- replaced with /var/log/ceph/fba12862-1c78-11f1-b92d-892b8c98a56b/ceph-mon.vm00.log.gz 2026-03-10T12:06:55.784 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-10T12:06:55.784 INFO:teuthology.orchestra.run.vm00.stderr:real 0m0.145s 2026-03-10T12:06:55.784 INFO:teuthology.orchestra.run.vm00.stderr:user 0m0.248s 2026-03-10T12:06:55.784 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m0.027s 2026-03-10T12:06:55.785 INFO:tasks.cephadm:Archiving logs... 2026-03-10T12:06:55.785 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020/remote/vm00/log 2026-03-10T12:06:55.785 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T12:06:55.867 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020/remote/vm09/log 2026-03-10T12:06:55.867 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T12:06:55.904 INFO:tasks.cephadm:Removing cluster... 2026-03-10T12:06:55.904 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid fba12862-1c78-11f1-b92d-892b8c98a56b --force 2026-03-10T12:06:56.035 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:06:56.142 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid fba12862-1c78-11f1-b92d-892b8c98a56b --force 2026-03-10T12:06:56.275 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: fba12862-1c78-11f1-b92d-892b8c98a56b 2026-03-10T12:06:56.378 INFO:tasks.cephadm:Removing cephadm ... 
2026-03-10T12:06:56.378 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T12:06:56.393 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T12:06:56.410 INFO:tasks.cephadm:Teardown complete 2026-03-10T12:06:56.410 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-10T12:06:56.531 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-10T12:06:56.531 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T12:06:56.533 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T12:06:56.546 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found 2026-03-10T12:06:56.548 INFO:teuthology.orchestra.run.vm09.stderr:bash: line 1: ntpq: command not found 2026-03-10T12:06:56.550 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-10T12:06:56.550 INFO:teuthology.orchestra.run.vm00.stdout:=============================================================================== 2026-03-10T12:06:56.550 INFO:teuthology.orchestra.run.vm00.stdout:^+ ntp2.wup-de.hosts.301-mo> 2 6 377 22 -960us[ -939us] +/- 21ms 2026-03-10T12:06:56.550 INFO:teuthology.orchestra.run.vm00.stdout:^* time.cloudflare.com 3 6 377 21 -948us[ -927us] +/- 15ms 2026-03-10T12:06:56.550 INFO:teuthology.orchestra.run.vm00.stdout:^+ v2202508239286376495.ult> 2 6 377 20 +3535us[+3535us] +/- 19ms 2026-03-10T12:06:56.550 INFO:teuthology.orchestra.run.vm00.stdout:^+ ntp2.wtnet.de 2 6 377 22 -815us[ -794us] +/- 20ms 2026-03-10T12:06:56.551 INFO:teuthology.orchestra.run.vm09.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-10T12:06:56.551 INFO:teuthology.orchestra.run.vm09.stdout:=============================================================================== 2026-03-10T12:06:56.551 
INFO:teuthology.orchestra.run.vm09.stdout:^* time.cloudflare.com 3 6 377 21 -1204us[-1225us] +/- 15ms 2026-03-10T12:06:56.551 INFO:teuthology.orchestra.run.vm09.stdout:^+ v2202508239286376495.ult> 2 6 377 21 +3056us[+3056us] +/- 19ms 2026-03-10T12:06:56.551 INFO:teuthology.orchestra.run.vm09.stdout:^+ ntp2.wtnet.de 2 6 377 22 -1158us[-1180us] +/- 20ms 2026-03-10T12:06:56.551 INFO:teuthology.orchestra.run.vm09.stdout:^+ ntp2.wup-de.hosts.301-mo> 2 6 377 22 -123us[ -144us] +/- 20ms 2026-03-10T12:06:56.551 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-10T12:06:56.649 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-10T12:06:56.649 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-10T12:06:56.652 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-10T12:06:56.712 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-10T12:06:56.736 INFO:teuthology.task.internal:Duration was 407.256066 seconds 2026-03-10T12:06:56.737 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-10T12:06:56.739 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-10T12:06:56.739 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T12:06:56.740 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T12:06:56.777 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T12:06:56.778 INFO:teuthology.orchestra.run.vm09.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T12:06:57.233 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
2026-03-10T12:06:57.233 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local 2026-03-10T12:06:57.233 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T12:06:57.258 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local 2026-03-10T12:06:57.258 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root 
filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T12:06:57.297 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-10T12:06:57.297 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T12:06:57.300 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T12:06:57.792 INFO:teuthology.task.internal.syslog:Compressing syslogs... 2026-03-10T12:06:57.792 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T12:06:57.794 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T12:06:57.816 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T12:06:57.817 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T12:06:57.817 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T12:06:57.817 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-10T12:06:57.817 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-10T12:06:57.819 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T12:06:57.820 
2026-03-10T12:06:57.820 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T12:06:57.820 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T12:06:57.820 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T12:06:57.820 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T12:06:57.948 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T12:06:57.963 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T12:06:57.965 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T12:06:57.969 INFO:teuthology.task.internal:Restoring /etc/sudoers...
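The compression step finds every *.log under the syslog archive and hands each file to its own gzip process via xargs; `--max-procs=0` means no concurrency cap, which is why the two nodes' gzip stderr lines interleave. A minimal local sketch of the same pattern, without the `sudo` and `--verbose` parts (`compress_logs` is an illustrative name):

```shell
# Gzip every *.log under a directory, one gzip invocation per file,
# with as many processes in parallel as there are files. -print0/-0
# keeps unusual filenames safe; --no-run-if-empty avoids invoking gzip
# with no arguments when find matches nothing.
compress_logs() {
    find "$1" -name '*.log' -print0 \
        | xargs -0 --max-args=1 --max-procs=0 --no-run-if-empty -- gzip -5 --
}
```

gzip replaces each file in place with `file.log.gz`, which is why the later archive transfer only sees the compressed logs.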
2026-03-10T12:06:57.969 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T12:06:58.016 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T12:06:58.042 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T12:06:58.045 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T12:06:58.058 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T12:06:58.084 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-03-10T12:06:58.112 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-10T12:06:58.127 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T12:06:58.159 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:06:58.159 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T12:06:58.183 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T12:06:58.183 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T12:06:58.187 INFO:teuthology.task.internal:Transferring archived files...
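The coredump unwind resets `kernel.core_pattern`, deletes any cores that `file(1)` attributes to systemd-sysusers, and removes the coredump directory only if it is then empty; the follow-up `test -e` returning 1 on both nodes confirms the directory was gone, i.e. no real cores were collected. The pruning part, minus `sysctl` and `sudo`, can be sketched as follows (`prune_coredumps` is an illustrative name):

```shell
# Delete core files attributed to systemd-sysusers, keep everything
# else, then remove the directory only if nothing is left in it.
prune_coredumps() {
    for f in $(find "$1" -type f); do
        # `|| true` keeps the loop going when a core is worth keeping
        file "$f" | grep -q systemd-sysusers && rm "$f" || true
    done
    rmdir --ignore-fail-on-non-empty -- "$1"
}
```

A surviving directory is the signal that real crash dumps were captured and should be archived rather than discarded.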
2026-03-10T12:06:58.187 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020/remote/vm00
2026-03-10T12:06:58.187 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T12:06:58.230 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1020/remote/vm09
2026-03-10T12:06:58.230 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T12:06:58.258 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T12:06:58.258 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T12:06:58.271 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T12:06:58.313 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T12:06:58.317 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T12:06:58.318 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T12:06:58.321 INFO:teuthology.task.internal:Tidying up after the test...
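Archive transfer streams a tar of the remote directory over the SSH channel (`tar c -f - -C <dir> -- .`) and unpacks it on the teuthology host, avoiding any intermediate archive file on the node. The same stream-copy pattern run locally, with an illustrative function name:

```shell
# Copy a directory tree by piping tar's stdout straight into a second
# tar extracting elsewhere -- the local equivalent of the tar-over-SSH
# transfer in the log. No intermediate archive file is created.
copy_tree() {
    mkdir -p "$2"
    tar c -f - -C "$1" -- . | tar x -f - -C "$2"
}
```

Streaming keeps disk usage on the test node flat, which matters because the archive directory is deleted immediately after the transfer.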
2026-03-10T12:06:58.321 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T12:06:58.326 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T12:06:58.340 INFO:teuthology.orchestra.run.vm00.stdout:  8532147      0 drwxr-xr-x   2 ubuntu   ubuntu          6 Mar 10 12:06 /home/ubuntu/cephtest
2026-03-10T12:06:58.371 INFO:teuthology.orchestra.run.vm09.stdout:  8532143      0 drwxr-xr-x   2 ubuntu   ubuntu          6 Mar 10 12:06 /home/ubuntu/cephtest
2026-03-10T12:06:58.372 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T12:06:58.378 INFO:teuthology.run:Summary data: description: orch/cephadm/smoke-roleless/{0-distro/centos_9.stream_runc 1-start 2-services/nfs2 3-final} duration: 407.2560656070709 owner: kyr success: true
2026-03-10T12:06:58.378 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T12:06:58.397 INFO:teuthology.run:pass