2026-03-10T09:30:54.017 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T09:30:54.021 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T09:30:54.040 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983
branch: squid
description: orch/cephadm/osds/{0-distro/centos_9.stream 1-start 2-ops/rm-zap-add}
email: null
first_in_suite: false
flavor: default
job_id: '983'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - OSD_DOWN
    - CEPHADM_FAILED_DAEMON
    - but it is still running
    - PG_DEGRADED
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - client.0
- - host.b
  - client.1
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMWa0rA5lj7/lT+37HyQ4Duu5thaVN53L3d9JfrCMM87pX8Sn61rnsTCH3vd4+PfWSx46FRYaZtRQnl1Td+8lco=
  vm08.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjYYlINt2+pyp2ZsYVlZP7YSmN6axWZtxa/EiFT1zDUQ565dD2MnLDKL3IGClGW9nOAJzuoWRQ1WDNTUdQdFsI=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- cephadm:
    roleless: true
- cephadm.shell:
    host.a:
    - ceph orch status
    - ceph orch ps
    - ceph orch ls
    - ceph orch host ls
    - ceph orch device ls
    - ceph orch ls | grep '^osd.all-available-devices '
- cephadm.shell:
    host.a:
    - 'set -e

      set -x

      ceph orch ps

      ceph orch device ls

      DEVID=$(ceph device ls | grep osd.1 | awk ''{print $1}'')

      HOST=$(ceph orch device ls | grep $DEVID | awk ''{print $1}'')

      DEV=$(ceph orch device ls | grep $DEVID | awk ''{print $2}'')

      echo "host $HOST, dev $DEV, devid $DEVID"

      ceph orch osd rm 1

      while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done

      ceph orch device zap $HOST $DEV --force

      ceph orch daemon add osd $HOST:$DEV

      while ! ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done

      '
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T09:30:54.040 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T09:30:54.040 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T09:30:54.040 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T09:30:54.040 INFO:teuthology.task.internal:Checking packages...
2026-03-10T09:30:54.040 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T09:30:54.040 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T09:30:54.040 INFO:teuthology.packaging:ref: None
2026-03-10T09:30:54.040 INFO:teuthology.packaging:tag: None
2026-03-10T09:30:54.040 INFO:teuthology.packaging:branch: squid
2026-03-10T09:30:54.040 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:30:54.040 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T09:30:54.756 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T09:30:54.757 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T09:30:54.758 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T09:30:54.758 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T09:30:54.758 INFO:teuthology.task.internal:Saving configuration
2026-03-10T09:30:54.762 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T09:30:54.763 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T09:30:54.769 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm01.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 09:29:40.377862', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:01', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMWa0rA5lj7/lT+37HyQ4Duu5thaVN53L3d9JfrCMM87pX8Sn61rnsTCH3vd4+PfWSx46FRYaZtRQnl1Td+8lco='}
2026-03-10T09:30:54.774 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm08.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 09:29:40.378335', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:08', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPjYYlINt2+pyp2ZsYVlZP7YSmN6axWZtxa/EiFT1zDUQ565dD2MnLDKL3IGClGW9nOAJzuoWRQ1WDNTUdQdFsI='}
2026-03-10T09:30:54.774 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T09:30:54.774 INFO:teuthology.task.internal:roles: ubuntu@vm01.local - ['host.a', 'client.0']
2026-03-10T09:30:54.774 INFO:teuthology.task.internal:roles: ubuntu@vm08.local - ['host.b', 'client.1']
2026-03-10T09:30:54.774 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T09:30:54.779 DEBUG:teuthology.task.console_log:vm01 does not support IPMI; excluding
2026-03-10T09:30:54.783 DEBUG:teuthology.task.console_log:vm08 does not support IPMI; excluding
2026-03-10T09:30:54.783 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f2ce14cfeb0>, signals=[15])
2026-03-10T09:30:54.783 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T09:30:54.784 INFO:teuthology.task.internal:Opening connections...
2026-03-10T09:30:54.784 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local
2026-03-10T09:30:54.784 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:30:54.843 DEBUG:teuthology.task.internal:connecting to ubuntu@vm08.local
2026-03-10T09:30:54.844 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:30:54.901 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T09:30:54.902 DEBUG:teuthology.orchestra.run.vm01:> uname -m
2026-03-10T09:30:54.948 INFO:teuthology.orchestra.run.vm01.stdout:x86_64
2026-03-10T09:30:54.948 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:NAME="CentOS Stream"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="9"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:ID="centos"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE="rhel fedora"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="9"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:PLATFORM_ID="platform:el9"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:ANSI_COLOR="0;31"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:LOGO="fedora-logo-icon"
2026-03-10T09:30:55.002 INFO:teuthology.orchestra.run.vm01.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T09:30:55.003 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://centos.org/"
2026-03-10T09:30:55.003 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T09:30:55.003 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T09:30:55.003 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T09:30:55.003 INFO:teuthology.lock.ops:Updating vm01.local on lock server
2026-03-10T09:30:55.007 DEBUG:teuthology.orchestra.run.vm08:> uname -m
2026-03-10T09:30:55.021 INFO:teuthology.orchestra.run.vm08.stdout:x86_64
2026-03-10T09:30:55.021 DEBUG:teuthology.orchestra.run.vm08:> cat /etc/os-release
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:NAME="CentOS Stream"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:VERSION="9"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:ID="centos"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:ID_LIKE="rhel fedora"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_ID="9"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:PLATFORM_ID="platform:el9"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:ANSI_COLOR="0;31"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:LOGO="fedora-logo-icon"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:HOME_URL="https://centos.org/"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T09:30:55.076 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T09:30:55.076 INFO:teuthology.lock.ops:Updating vm08.local on lock server
2026-03-10T09:30:55.080 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T09:30:55.082 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T09:30:55.083 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T09:30:55.083 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest
2026-03-10T09:30:55.085 DEBUG:teuthology.orchestra.run.vm08:> test '!' -e /home/ubuntu/cephtest
2026-03-10T09:30:55.129 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T09:30:55.130 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T09:30:55.130 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph)
2026-03-10T09:30:55.138 DEBUG:teuthology.orchestra.run.vm08:> test -z $(ls -A /var/lib/ceph)
2026-03-10T09:30:55.150 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T09:30:55.183 INFO:teuthology.orchestra.run.vm08.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T09:30:55.183 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T09:30:55.190 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready
2026-03-10T09:30:55.203 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:30:55.389 DEBUG:teuthology.orchestra.run.vm08:> test -e /ceph-qa-ready
2026-03-10T09:30:55.403 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:30:55.579 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T09:30:55.580 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T09:30:55.580 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T09:30:55.582 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T09:30:55.597 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T09:30:55.598 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T09:30:55.599 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T09:30:55.599 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T09:30:55.637 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T09:30:55.654 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T09:30:55.655 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T09:30:55.655 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T09:30:55.704 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:30:55.704 DEBUG:teuthology.orchestra.run.vm08:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T09:30:55.718 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:30:55.719 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T09:30:55.746 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T09:30:55.766 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:30:55.775 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:30:55.784 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:30:55.792 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T09:30:55.794 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T09:30:55.795 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T09:30:55.796 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T09:30:55.818 DEBUG:teuthology.orchestra.run.vm08:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T09:30:55.860 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T09:30:55.862 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T09:30:55.862 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T09:30:55.885 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T09:30:55.915 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:30:55.962 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:30:56.017 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:30:56.017 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T09:30:56.074 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:30:56.098 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:30:56.154 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:30:56.154 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T09:30:56.214 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart
2026-03-10T09:30:56.216 DEBUG:teuthology.orchestra.run.vm08:> sudo service rsyslog restart
2026-03-10T09:30:56.241 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:30:56.280 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T09:30:56.741 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T09:30:56.743 INFO:teuthology.task.internal:Starting timer...
2026-03-10T09:30:56.743 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T09:30:56.746 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T09:30:56.748 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T09:30:56.748 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported
2026-03-10T09:30:56.748 INFO:teuthology.task.selinux:Excluding vm08: VMs are not yet supported
2026-03-10T09:30:56.749 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T09:30:56.749 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T09:30:56.749 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T09:30:56.749 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T09:30:56.750 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T09:30:56.750 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T09:30:56.751 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T09:30:57.349 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T09:30:57.355 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T09:30:57.355 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryvoujs4d9 --limit vm01.local,vm08.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T09:33:31.703 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local'), Remote(name='ubuntu@vm08.local')]
2026-03-10T09:33:31.703 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local'
2026-03-10T09:33:31.704 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:33:31.766 DEBUG:teuthology.orchestra.run.vm01:> true
2026-03-10T09:33:31.848 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local'
2026-03-10T09:33:31.849 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm08.local'
2026-03-10T09:33:31.849 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T09:33:31.915 DEBUG:teuthology.orchestra.run.vm08:> true
2026-03-10T09:33:31.996 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm08.local'
2026-03-10T09:33:31.996 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T09:33:31.998 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T09:33:31.998 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T09:33:31.999 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:33:32.000 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T09:33:32.000 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T09:33:32.031 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T09:33:32.046 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T09:33:32.072 INFO:teuthology.orchestra.run.vm01.stderr:sudo: ntpd: command not found
2026-03-10T09:33:32.075 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T09:33:32.085 INFO:teuthology.orchestra.run.vm01.stdout:506 Cannot talk to daemon
2026-03-10T09:33:32.093 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T09:33:32.101 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T09:33:32.117 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T09:33:32.127 INFO:teuthology.orchestra.run.vm08.stderr:sudo: ntpd: command not found
2026-03-10T09:33:32.140 INFO:teuthology.orchestra.run.vm08.stdout:506 Cannot talk to daemon
2026-03-10T09:33:32.154 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T09:33:32.169 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T09:33:32.169 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:33:32.222 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found
2026-03-10T09:33:32.280 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:33:32.280 INFO:teuthology.orchestra.run.vm08.stdout:===============================================================================
2026-03-10T09:33:32.280 INFO:teuthology.orchestra.run.vm08.stdout:^? x1.ncomputers.org 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:33:32.280 INFO:teuthology.orchestra.run.vm08.stdout:^? ntp1.wtnet.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:33:32.280 INFO:teuthology.orchestra.run.vm08.stdout:^? 193.158.22.13 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:33:32.280 INFO:teuthology.orchestra.run.vm08.stdout:^? 141.98.138.220 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:33:32.281 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T09:33:32.281 INFO:teuthology.orchestra.run.vm01.stdout:===============================================================================
2026-03-10T09:33:32.281 INFO:teuthology.orchestra.run.vm01.stdout:^? 141.98.138.220 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:33:32.281 INFO:teuthology.orchestra.run.vm01.stdout:^? x1.ncomputers.org 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:33:32.281 INFO:teuthology.orchestra.run.vm01.stdout:^? ntp1.wtnet.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:33:32.281 INFO:teuthology.orchestra.run.vm01.stdout:^? 193.158.22.13 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T09:33:32.281 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T09:33:32.284 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T09:33:32.284 DEBUG:teuthology.orchestra.run.vm01:> TESTDIR=/home/ubuntu/cephtest bash -s 2026-03-10T09:33:32.284 DEBUG:teuthology.orchestra.run.vm08:> TESTDIR=/home/ubuntu/cephtest bash -s 2026-03-10T09:33:32.322 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf remove nvme-cli -y 2026-03-10T09:33:32.323 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf install nvmetcli nvme-cli -y 2026-03-10T09:33:32.323 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm08.local 2026-03-10T09:33:32.323 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y 2026-03-10T09:33:32.323 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y 2026-03-10T09:33:32.323 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo dnf remove nvme-cli -y 2026-03-10T09:33:32.323 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo dnf install nvmetcli nvme-cli -y 2026-03-10T09:33:32.323 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm01.local 2026-03-10T09:33:32.323 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y 2026-03-10T09:33:32.323 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y 2026-03-10T09:33:32.533 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: nvme-cli 2026-03-10T09:33:32.533 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T09:33:32.537 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T09:33:32.537 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T09:33:32.537 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T09:33:32.542 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: nvme-cli 2026-03-10T09:33:32.542 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T09:33:32.549 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T09:33:32.550 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 
2026-03-10T09:33:32.550 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:33:32.920 INFO:teuthology.orchestra.run.vm01.stdout:Last metadata expiration check: 0:02:01 ago on Tue 10 Mar 2026 09:31:31 AM UTC.
2026-03-10T09:33:33.019 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout: Package Architecture Version Repository Size
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:Installing:
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:Installing dependencies:
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:Install 6 Packages
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:Total download size: 2.3 M
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:Installed size: 11 M
2026-03-10T09:33:33.020 INFO:teuthology.orchestra.run.vm01.stdout:Downloading Packages:
2026-03-10T09:33:33.071 INFO:teuthology.orchestra.run.vm08.stdout:Last metadata expiration check: 0:01:33 ago on Tue 10 Mar 2026 09:32:00 AM UTC.
2026-03-10T09:33:33.162 INFO:teuthology.orchestra.run.vm01.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 1.8 MB/s | 44 kB 00:00
2026-03-10T09:33:33.183 INFO:teuthology.orchestra.run.vm01.stdout:(2/6): python3-kmod-0.9-32.el9.x86_64.rpm 3.9 MB/s | 84 kB 00:00
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout: Package Architecture Version Repository Size
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:Installing:
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies:
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T09:33:33.200 INFO:teuthology.orchestra.run.vm08.stdout:Install 6 Packages
2026-03-10T09:33:33.201 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:33:33.201 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 2.3 M
2026-03-10T09:33:33.201 INFO:teuthology.orchestra.run.vm08.stdout:Installed size: 11 M
2026-03-10T09:33:33.201 INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages:
2026-03-10T09:33:33.201 INFO:teuthology.orchestra.run.vm01.stdout:(3/6): python3-configshell-1.1.30-1.el9.noarch. 1.1 MB/s | 72 kB 00:00
2026-03-10T09:33:33.212 INFO:teuthology.orchestra.run.vm01.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 5.1 MB/s | 150 kB 00:00
2026-03-10T09:33:33.280 INFO:teuthology.orchestra.run.vm01.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 8.1 MB/s | 1.2 MB 00:00
2026-03-10T09:33:33.284 INFO:teuthology.orchestra.run.vm01.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 9.9 MB/s | 837 kB 00:00
2026-03-10T09:33:33.284 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T09:33:33.284 INFO:teuthology.orchestra.run.vm01.stdout:Total 8.8 MB/s | 2.3 MB 00:00
2026-03-10T09:33:33.344 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T09:33:33.352 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T09:33:33.352 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T09:33:33.404 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T09:33:33.405 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T09:33:33.561 INFO:teuthology.orchestra.run.vm08.stdout:(1/6): python3-configshell-1.1.30-1.el9.noarch. 280 kB/s | 72 kB 00:00
2026-03-10T09:33:33.564 INFO:teuthology.orchestra.run.vm08.stdout:(2/6): nvmetcli-0.8-3.el9.noarch.rpm 169 kB/s | 44 kB 00:00
2026-03-10T09:33:33.574 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T09:33:33.585 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-10T09:33:33.596 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-10T09:33:33.604 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T09:33:33.612 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T09:33:33.615 INFO:teuthology.orchestra.run.vm01.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T09:33:33.629 INFO:teuthology.orchestra.run.vm08.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 1.2 MB/s | 84 kB 00:00
2026-03-10T09:33:33.695 INFO:teuthology.orchestra.run.vm08.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.1 MB/s | 150 kB 00:00
2026-03-10T09:33:33.772 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T09:33:33.778 INFO:teuthology.orchestra.run.vm01.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T09:33:33.819 INFO:teuthology.orchestra.run.vm08.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 2.2 MB/s | 1.2 MB 00:00
2026-03-10T09:33:33.829 INFO:teuthology.orchestra.run.vm08.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 4.1 MB/s | 837 kB 00:00
2026-03-10T09:33:33.829 INFO:teuthology.orchestra.run.vm08.stdout:--------------------------------------------------------------------------------
2026-03-10T09:33:33.829 INFO:teuthology.orchestra.run.vm08.stdout:Total 3.7 MB/s | 2.3 MB 00:00
2026-03-10T09:33:33.906 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T09:33:33.914 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T09:33:33.915 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T09:33:33.978 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T09:33:33.978 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T09:33:34.147 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T09:33:34.147 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T09:33:34.147 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:33:34.168 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T09:33:34.180 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-10T09:33:34.190 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-10T09:33:34.200 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T09:33:34.212 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T09:33:34.213 INFO:teuthology.orchestra.run.vm08.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T09:33:34.399 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T09:33:34.407 INFO:teuthology.orchestra.run.vm08.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T09:33:34.731 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-10T09:33:34.731 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-10T09:33:34.731 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T09:33:34.731 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T09:33:34.731 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-10T09:33:34.811 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T09:33:34.811 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T09:33:34.811 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:33:34.835 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-10T09:33:34.836 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:33:34.836 INFO:teuthology.orchestra.run.vm01.stdout:Installed:
2026-03-10T09:33:34.836 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T09:33:34.836 INFO:teuthology.orchestra.run.vm01.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T09:33:34.836 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T09:33:34.836 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:33:34.836 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T09:33:34.910 DEBUG:teuthology.parallel:result is None
2026-03-10T09:33:35.378 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-10T09:33:35.378 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-10T09:33:35.378 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T09:33:35.378 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T09:33:35.379 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-10T09:33:35.463 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-10T09:33:35.463 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:33:35.463 INFO:teuthology.orchestra.run.vm08.stdout:Installed:
2026-03-10T09:33:35.463 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T09:33:35.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T09:33:35.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T09:33:35.463 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:33:35.463 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T09:33:35.522 DEBUG:teuthology.parallel:result is None
2026-03-10T09:33:35.522 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T09:33:35.571 INFO:tasks.cephadm:Config: {'roleless': True, 'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'OSD_DOWN', 'CEPHADM_FAILED_DAEMON', 'but it is still running', 'PG_DEGRADED'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T09:33:35.571 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:33:35.572 INFO:tasks.cephadm:Cluster fsid is 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:33:35.572 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T09:33:35.572 INFO:tasks.cephadm:No mon roles; fabricating mons
2026-03-10T09:33:35.572 INFO:tasks.cephadm:Monitor IPs: {'mon.vm01': '192.168.123.101', 'mon.vm08': '192.168.123.108'}
2026-03-10T09:33:35.572 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T09:33:35.572 DEBUG:teuthology.orchestra.run.vm01:> sudo hostname $(hostname -s)
2026-03-10T09:33:35.603 DEBUG:teuthology.orchestra.run.vm08:> sudo hostname $(hostname -s)
2026-03-10T09:33:35.646 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra
2026-03-10T09:33:35.646 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:33:36.316 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T09:33:36.943 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:33:36.944 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T09:33:36.944 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
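The two URLs logged above follow a fixed shape: a shaman search query for ready builds matching project/flavor/distro/sha1, and a direct chacra path to the standalone cephadm binary of one build. A minimal sketch of that URL assembly (function names are illustrative, not teuthology's API; only the fields visible in the log are used):

```python
from urllib.parse import urlencode

def shaman_search_url(project, sha1, distro_path, flavor="default"):
    # Mirrors the query logged above: ask shaman for "ready" builds that
    # match project, flavor, distro/version/arch path, and commit sha1.
    params = {
        "status": "ready",
        "project": project,
        "flavor": flavor,
        "distros": distro_path,   # e.g. "centos/9/x86_64"
        "sha1": sha1,
    }
    return "https://shaman.ceph.com/api/search?" + urlencode(params)

def chacra_binary_url(host, project, ref, sha1, distro, distro_version,
                      arch, flavor="default"):
    # Shape of the discovered binary URL above: a direct path on the
    # chacra host to the compiled standalone cephadm for one build.
    return (f"https://{host}/binaries/{project}/{ref}/{sha1}/"
            f"{distro}/{distro_version}/{arch}/flavors/{flavor}/cephadm")
```

Both functions reproduce the exact URLs seen in this log when fed the values shaman returned (host `3.chacra.ceph.com`, ref `squid`, the build sha1).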
2026-03-10T09:33:36.944 DEBUG:teuthology.orchestra.run.vm01:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T09:33:38.791 INFO:teuthology.orchestra.run.vm01.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 09:33 /home/ubuntu/cephtest/cephadm
2026-03-10T09:33:38.792 DEBUG:teuthology.orchestra.run.vm08:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T09:33:40.197 INFO:teuthology.orchestra.run.vm08.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 09:33 /home/ubuntu/cephtest/cephadm
2026-03-10T09:33:40.197 DEBUG:teuthology.orchestra.run.vm01:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T09:33:40.217 DEBUG:teuthology.orchestra.run.vm08:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T09:33:40.238 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T09:33:40.238 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T09:33:40.261 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T09:33:40.440 INFO:teuthology.orchestra.run.vm01.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
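The `test -s ... && test $(stat -c%s ...) -gt 1000 && chmod +x` step above is a sanity check: the download is only marked executable if it is non-empty and larger than a trivial size, which guards against an HTML error page having been saved in place of the binary. A minimal sketch of the same check (function name is illustrative):

```python
import os
import stat

def verify_cephadm(path, min_size=1000):
    """Mirror of the shell check above: accept the downloaded file only
    if it exists and exceeds min_size bytes, then mark it executable."""
    if not os.path.exists(path) or os.path.getsize(path) <= min_size:
        return False
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return True
```

The 788355-byte file logged above would pass; a short error page would not.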
2026-03-10T09:33:40.491 INFO:teuthology.orchestra.run.vm08.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:34:15.232 INFO:teuthology.orchestra.run.vm01.stdout:{
2026-03-10T09:34:15.233 INFO:teuthology.orchestra.run.vm01.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T09:34:15.233 INFO:teuthology.orchestra.run.vm01.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T09:34:15.233 INFO:teuthology.orchestra.run.vm01.stdout: "repo_digests": [
2026-03-10T09:34:15.233 INFO:teuthology.orchestra.run.vm01.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T09:34:15.233 INFO:teuthology.orchestra.run.vm01.stdout: ]
2026-03-10T09:34:15.233 INFO:teuthology.orchestra.run.vm01.stdout:}
2026-03-10T09:34:21.007 INFO:teuthology.orchestra.run.vm08.stdout:{
2026-03-10T09:34:21.007 INFO:teuthology.orchestra.run.vm08.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T09:34:21.007 INFO:teuthology.orchestra.run.vm08.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T09:34:21.007 INFO:teuthology.orchestra.run.vm08.stdout: "repo_digests": [
2026-03-10T09:34:21.007 INFO:teuthology.orchestra.run.vm08.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T09:34:21.007 INFO:teuthology.orchestra.run.vm08.stdout: ]
2026-03-10T09:34:21.007 INFO:teuthology.orchestra.run.vm08.stdout:}
2026-03-10T09:34:21.026 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph
2026-03-10T09:34:21.056 DEBUG:teuthology.orchestra.run.vm08:> sudo mkdir -p /etc/ceph
2026-03-10T09:34:21.082 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /etc/ceph
2026-03-10T09:34:21.118 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 777 /etc/ceph
2026-03-10T09:34:21.147 INFO:tasks.cephadm:Writing seed config...
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T09:34:21.148 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True
2026-03-10T09:34:21.148 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:34:21.148 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T09:34:21.173 DEBUG:tasks.cephadm:Final config: [global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 362248b4-1c64-11f1-a99c-11af91d3124e
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = True
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T09:34:21.173 DEBUG:teuthology.orchestra.run.vm01:mon.vm01> sudo journalctl -f -n 0 -u ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm01.service
2026-03-10T09:34:21.214 INFO:tasks.cephadm:Bootstrapping...
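The seed config dumped above is plain INI with `#` inline comments. A minimal sketch of reading such a fragment back with the standard library (the snippet below uses an abbreviated, hypothetical excerpt of the config, not the full dump):

```python
import configparser

# Abbreviated excerpt in the same INI style as the seed config above.
SEED = """
[global]
log to file = true
mon max pg per osd = 10000  # >= luminous
[osd]
osd mclock profile = high_recovery_ops
debug osd = 20
"""

# inline_comment_prefixes is needed so "# >= luminous" is stripped
# from the value rather than kept as part of it.
cfg = configparser.ConfigParser(inline_comment_prefixes=("#",))
cfg.read_string(SEED)
```

Keys with embedded spaces ("mon max pg per osd") parse fine, since configparser splits on the first `=` by default.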
2026-03-10T09:34:21.214 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 362248b4-1c64-11f1-a99c-11af91d3124e --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-ip 192.168.123.101 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T09:34:21.349 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T09:34:21.350 INFO:teuthology.orchestra.run.vm01.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '362248b4-1c64-11f1-a99c-11af91d3124e', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-ip', '192.168.123.101', '--skip-admin-label']
2026-03-10T09:34:21.350 INFO:teuthology.orchestra.run.vm01.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T09:34:21.350 INFO:teuthology.orchestra.run.vm01.stdout:Verifying podman|docker is present...
2026-03-10T09:34:21.374 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 5.8.0
2026-03-10T09:34:21.374 INFO:teuthology.orchestra.run.vm01.stdout:Verifying lvm2 is present...
2026-03-10T09:34:21.374 INFO:teuthology.orchestra.run.vm01.stdout:Verifying time synchronization is in place...
2026-03-10T09:34:21.381 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T09:34:21.381 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T09:34:21.386 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T09:34:21.386 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-10T09:34:21.390 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-10T09:34:21.395 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-10T09:34:21.395 INFO:teuthology.orchestra.run.vm01.stdout:Unit chronyd.service is enabled and running
2026-03-10T09:34:21.395 INFO:teuthology.orchestra.run.vm01.stdout:Repeating the final host check...
2026-03-10T09:34:21.414 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 5.8.0
2026-03-10T09:34:21.414 INFO:teuthology.orchestra.run.vm01.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-10T09:34:21.414 INFO:teuthology.orchestra.run.vm01.stdout:systemctl is present
2026-03-10T09:34:21.414 INFO:teuthology.orchestra.run.vm01.stdout:lvcreate is present
2026-03-10T09:34:21.419 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T09:34:21.419 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T09:34:21.425 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T09:34:21.425 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-10T09:34:21.430 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-10T09:34:21.437 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-10T09:34:21.437 INFO:teuthology.orchestra.run.vm01.stdout:Unit chronyd.service is enabled and running
2026-03-10T09:34:21.437 INFO:teuthology.orchestra.run.vm01.stdout:Host looks OK
2026-03-10T09:34:21.437 INFO:teuthology.orchestra.run.vm01.stdout:Cluster fsid: 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:34:21.437 INFO:teuthology.orchestra.run.vm01.stdout:Acquiring lock 140006371349312 on /run/cephadm/362248b4-1c64-11f1-a99c-11af91d3124e.lock
2026-03-10T09:34:21.437 INFO:teuthology.orchestra.run.vm01.stdout:Lock 140006371349312 acquired on /run/cephadm/362248b4-1c64-11f1-a99c-11af91d3124e.lock
2026-03-10T09:34:21.437 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 3300 ...
2026-03-10T09:34:21.438 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 6789 ...
2026-03-10T09:34:21.438 INFO:teuthology.orchestra.run.vm01.stdout:Base mon IP(s) is [192.168.123.101:3300, 192.168.123.101:6789], mon addrv is [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-10T09:34:21.440 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.101 metric 100
2026-03-10T09:34:21.440 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.101 metric 100
2026-03-10T09:34:21.443 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T09:34:21.443 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-10T09:34:21.446 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T09:34:21.446 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T09:34:21.446 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T09:34:21.447 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-10T09:34:21.447 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:1/64 scope link noprefixroute
2026-03-10T09:34:21.447 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T09:34:21.447 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-10T09:34:21.447 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-10T09:34:21.447 INFO:teuthology.orchestra.run.vm01.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-10T09:34:21.447 INFO:teuthology.orchestra.run.vm01.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T09:34:21.448 INFO:teuthology.orchestra.run.vm01.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T09:34:22.661 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T09:34:22.661 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
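The "Mon IP ... is in CIDR network ..." lines above come from matching the chosen mon IP against the locally configured networks reported by `ip route`/`ip -6 route`. A minimal sketch of that containment test, using only the standard `ipaddress` module (the function name is illustrative, not cephadm's own):

```python
import ipaddress

def infer_public_cidrs(mon_ip, local_networks):
    """Return the locally configured network(s) that contain mon_ip,
    mirroring the CIDR inference logged above. Networks of the wrong
    address family (e.g. fe80::/64 for an IPv4 mon IP) simply never
    match, since containment across IP versions is always False."""
    ip = ipaddress.ip_address(mon_ip)
    return [str(net)
            for net in map(ipaddress.ip_network, local_networks)
            if ip in net]
```

Fed the routes seen in this log (192.168.123.0/24 and fe80::/64), the mon IP 192.168.123.101 matches only the IPv4 network.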
2026-03-10T09:34:22.661 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Getting image source signatures
2026-03-10T09:34:22.661 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7
2026-03-10T09:34:22.661 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92
2026-03-10T09:34:22.661 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T09:34:22.661 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-10T09:34:22.783 INFO:teuthology.orchestra.run.vm01.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T09:34:22.784 INFO:teuthology.orchestra.run.vm01.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T09:34:22.784 INFO:teuthology.orchestra.run.vm01.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T09:34:22.892 INFO:teuthology.orchestra.run.vm01.stdout:stat: stdout 167 167
2026-03-10T09:34:22.892 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial keys...
2026-03-10T09:34:23.004 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQCe5a9p6XCHORAARpXLUZUm5krDfv0gCQWtLQ==
2026-03-10T09:34:23.108 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQCf5a9pOppVBBAAercRhdFlef6+2h/qZohbfw==
2026-03-10T09:34:23.205 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQCf5a9pD2BBChAAU64jx6KCqZsVEPZ5AkAv8w==
2026-03-10T09:34:23.205 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial monmap...
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool for vm01 [v2:192.168.123.101:3300,v1:192.168.123.101:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = quincy
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: set fsid to 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:34:23.300 INFO:teuthology.orchestra.run.vm01.stdout:Creating mon...
2026-03-10T09:34:23.417 INFO:teuthology.orchestra.run.vm01.stdout:create mon.vm01 on
2026-03-10T09:34:23.691 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T09:34:23.813 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-362248b4-1c64-11f1-a99c-11af91d3124e.target → /etc/systemd/system/ceph-362248b4-1c64-11f1-a99c-11af91d3124e.target.
2026-03-10T09:34:23.813 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-362248b4-1c64-11f1-a99c-11af91d3124e.target → /etc/systemd/system/ceph-362248b4-1c64-11f1-a99c-11af91d3124e.target.
2026-03-10T09:34:23.953 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm01
2026-03-10T09:34:23.953 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm01.service: Unit ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm01.service not loaded.
2026-03-10T09:34:24.084 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-362248b4-1c64-11f1-a99c-11af91d3124e.target.wants/ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm01.service → /etc/systemd/system/ceph-362248b4-1c64-11f1-a99c-11af91d3124e@.service.
2026-03-10T09:34:24.240 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present
2026-03-10T09:34:24.240 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T09:34:24.240 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon to start...
2026-03-10T09:34:24.240 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon...
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout id: 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout services:
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum vm01 (age 0.130138s)
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout data:
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T09:34:24.435 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T09:34:24.436 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T09:34:24.436 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:34:24.436 INFO:teuthology.orchestra.run.vm01.stdout:mon is available
2026-03-10T09:34:24.436 INFO:teuthology.orchestra.run.vm01.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T09:34:24.594 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:34:24.594 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T09:34:24.594 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:34:24.594 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T09:34:24.594 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-10T09:34:24.594 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T09:34:24.595 INFO:teuthology.orchestra.run.vm01.stdout:Generating new minimal ceph.conf...
2026-03-10T09:34:24.780 INFO:teuthology.orchestra.run.vm01.stdout:Restarting the monitor...
2026-03-10T09:34:24.890 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:24 vm01 ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01[50602]: 2026-03-10T09:34:24.858+0000 7f8310ecb640 -1 mon.vm01@0(leader) e1 *** Got Signal Terminated ***
2026-03-10T09:34:25.093 INFO:teuthology.orchestra.run.vm01.stdout:Setting public_network to 192.168.123.0/24 in mon config section
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:24 vm01 podman[50806]: 2026-03-10 09:34:24.891921388 +0000 UTC m=+0.045339045 container died b5bc3ecc69cf1c4dd13682f58db693998c68141b5abb44ea9eef97c3035738a8 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:24 vm01 podman[50806]: 2026-03-10 09:34:24.907870979 +0000 UTC m=+0.061288636 container remove b5bc3ecc69cf1c4dd13682f58db693998c68141b5abb44ea9eef97c3035738a8 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:24 vm01 bash[50806]: ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:24 vm01 systemd[1]: ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm01.service: Deactivated successfully.
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:24 vm01 systemd[1]: Stopped Ceph mon.vm01 for 362248b4-1c64-11f1-a99c-11af91d3124e.
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:24 vm01 systemd[1]: Starting Ceph mon.vm01 for 362248b4-1c64-11f1-a99c-11af91d3124e...
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 podman[50873]: 2026-03-10 09:34:25.050841274 +0000 UTC m=+0.015070165 container create 43041d34ec15f86df059c60a1760ed17e81dc58fc34ca6ce177de37dff03561b (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 podman[50873]: 2026-03-10 09:34:25.085159562 +0000 UTC m=+0.049388453 container init 43041d34ec15f86df059c60a1760ed17e81dc58fc34ca6ce177de37dff03561b (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 podman[50873]: 2026-03-10 09:34:25.087643922 +0000 UTC m=+0.051872813 container start 43041d34ec15f86df059c60a1760ed17e81dc58fc34ca6ce177de37dff03561b (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 bash[50873]: 43041d34ec15f86df059c60a1760ed17e81dc58fc34ca6ce177de37dff03561b
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 podman[50873]: 2026-03-10 09:34:25.044871482 +0000 UTC m=+0.009100382 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 systemd[1]: Started Ceph mon.vm01 for 362248b4-1c64-11f1-a99c-11af91d3124e.
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: pidfile_write: ignore empty --pid-file
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: load: jerasure load: lrc
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: RocksDB version: 7.9.2
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Git sha 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: DB SUMMARY
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: DB Session ID: 448YBWE9LDTJ26Q9MM16
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: CURRENT file: CURRENT
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: SST files in /var/lib/ceph/mon/ceph-vm01/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-vm01/store.db: 000009.log size: 75099 ;
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.error_if_exists: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.create_if_missing: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.paranoid_checks: 1
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.env: 0x55bc6059bdc0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.fs: PosixFileSystem
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.info_log: 0x55bc61094de0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_file_opening_threads: 16
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.statistics: (nil)
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.use_fsync: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_log_file_size: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.keep_log_file_num: 1000
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.recycle_log_file_num: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.allow_fallocate: 1
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.allow_mmap_reads: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.allow_mmap_writes: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.use_direct_reads: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T09:34:25.150 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.create_missing_column_families: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.db_log_dir:
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.wal_dir:
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.advise_random_on_open: 1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.db_write_buffer_size: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.write_buffer_manager: 0x55bc61099900
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.rate_limiter: (nil)
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.wal_recovery_mode: 2
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.enable_thread_tracking: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.enable_pipelined_write: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.unordered_write: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.row_cache: None
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.wal_filter: None
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.allow_ingest_behind: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.two_write_queues: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.manual_wal_flush: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.wal_compression: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.atomic_flush: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.log_readahead_size: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.best_efforts_recovery: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.allow_data_in_errors: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.db_host_id: __hostname__
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_background_jobs: 2
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_background_compactions: -1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_subcompactions: 1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_total_wal_size: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_open_files: -1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bytes_per_sync: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_readahead_size: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_background_flushes: -1
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Compression algorithms supported:
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: kZSTD supported: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: kXpressCompression supported: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: kBZip2Compression supported: 0
2026-03-10T09:34:25.151 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: kLZ4Compression supported: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: kZlibCompression supported: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: kLZ4HCCompression supported: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: kSnappyCompression supported: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-vm01/store.db/MANIFEST-000010
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.merge_operator:
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_filter: None
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_filter_factory: None
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.sst_partitioner_factory: None
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bc610945c0)
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: cache_index_and_filter_blocks: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: pin_top_level_index_and_filter: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: index_type: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: data_block_index_type: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: index_shortening: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: data_block_hash_table_util_ratio: 0.750000
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: checksum: 4
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: no_block_cache: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: block_cache: 0x55bc610b9350
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: block_cache_name: BinnedLRUCache
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: block_cache_options:
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: capacity : 536870912
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: num_shard_bits : 4
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: strict_capacity_limit : 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: high_pri_pool_ratio: 0.000
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: block_cache_compressed: (nil)
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: persistent_cache: (nil)
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: block_size: 4096
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: block_size_deviation: 10
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: block_restart_interval: 16
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: index_block_restart_interval: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: metadata_block_size: 4096
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: partition_filters: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: use_delta_encoding: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: filter_policy: bloomfilter
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: whole_key_filtering: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: verify_compression: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: read_amp_bytes_per_bit: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: format_version: 5
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: enable_index_compression: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: block_align: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: max_auto_readahead_size: 262144
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: prepopulate_block_cache: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: initial_auto_readahead_size: 8192
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout: num_file_reads_for_auto_readahead: 2
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.write_buffer_size: 33554432
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_write_buffer_number: 2
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression: NoCompression
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression: Disabled
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.prefix_extractor: nullptr
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.num_levels: 7
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T09:34:25.152 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 
09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 
2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.inplace_update_support: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T09:34:25.153 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.bloom_locality: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.max_successive_merges: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.paranoid_file_checks: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.ttl: 2592000 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T09:34:25.153 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.enable_blob_files: false 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.min_blob_size: 0 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: 
Options.blob_file_size: 268435456 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-vm01/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0e68d463-90b4-4ca2-87c5-6b77ff8b5b6f 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 
09:34:25 vm01 ceph-mon[50888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773135265110575, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773135265112153, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 223, "table_properties": {"data_size": 70446, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9562, "raw_average_key_size": 49, "raw_value_size": 65071, "raw_average_value_size": 335, "num_data_blocks": 8, "num_entries": 194, "num_filter_entries": 194, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773135265, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0e68d463-90b4-4ca2-87c5-6b77ff8b5b6f", "db_session_id": "448YBWE9LDTJ26Q9MM16", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 
09:34:25 vm01 ceph-mon[50888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773135265112203, "job": 1, "event": "recovery_finished"} 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-vm01/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bc610bae00 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: rocksdb: DB pointer 0x55bc611c6000 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: starting mon.vm01 rank 0 at public addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] at bind addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon_data /var/lib/ceph/mon/ceph-vm01 fsid 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01@-1(???) 
e1 preinit fsid 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01@-1(???).mds e1 new map 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01@-1(???).mds e1 print_map 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout: e1 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout: btime 2026-03-10T09:34:24:264237+0000 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout: legacy client fscid: -1 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout: 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout: No filesystems configured 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01@-1(???).osd e1 crush map has features 288514050185494528, 
adjusting msgr requires 2026-03-10T09:34:25.154 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T09:34:25.290 INFO:teuthology.orchestra.run.vm01.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T09:34:25.291 INFO:teuthology.orchestra.run.vm01.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:34:25.291 INFO:teuthology.orchestra.run.vm01.stdout:Creating mgr... 2026-03-10T09:34:25.291 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T09:34:25.291 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T09:34:25.292 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:8443 ... 2026-03-10T09:34:25.429 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mgr.vm01.itvfys 2026-03-10T09:34:25.429 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mgr.vm01.itvfys.service: Unit ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mgr.vm01.itvfys.service not loaded. 
2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mon.vm01 is new leader, mons vm01 in quorum (ranks 0) 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: monmap epoch 1 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: fsid 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: last_changed 2026-03-10T09:34:23.279412+0000 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: created 2026-03-10T09:34:23.279412+0000 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: min_mon_release 19 (squid) 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: election_strategy: 1 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.vm01 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: fsmap 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T09:34:25.435 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:25 vm01 ceph-mon[50888]: mgrmap e1: no daemons active 2026-03-10T09:34:25.547 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-362248b4-1c64-11f1-a99c-11af91d3124e.target.wants/ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mgr.vm01.itvfys.service → /etc/systemd/system/ceph-362248b4-1c64-11f1-a99c-11af91d3124e@.service. 
2026-03-10T09:34:25.704 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T09:34:25.704 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T09:34:25.704 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T09:34:25.704 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[9283, 8765, 8443]>. firewalld.service is not available 2026-03-10T09:34:25.704 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr to start... 2026-03-10T09:34:25.704 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr... 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "362248b4-1c64-11f1-a99c-11af91d3124e", 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:25.920 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 
2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "vm01" 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:34:25.921 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:34:24:264237+0000", 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T09:34:25.921 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T09:34:25.921 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:34:24.264761+0000", 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:34:25.922 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (1/15)... 2026-03-10T09:34:26.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:26 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3331866045' entity='client.admin' 2026-03-10T09:34:26.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:26 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/3487111236' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "362248b4-1c64-11f1-a99c-11af91d3124e", 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "vm01" 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 2, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: 
stdout "min_mon_release_name": "squid", 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T09:34:28.146 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 
2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:34:24:264237+0000", 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T09:34:28.147 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:34:24.264761+0000", 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:34:28.147 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (2/15)... 2026-03-10T09:34:28.445 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:28 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/145835595' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:34:29.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: Activating manager daemon vm01.itvfys 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: mgrmap e2: vm01.itvfys(active, starting, since 0.0043747s) 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr metadata", "who": "vm01.itvfys", "id": "vm01.itvfys"}]: dispatch 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:34:29.479 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: Manager daemon vm01.itvfys is now available 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm01.itvfys/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' 2026-03-10T09:34:29.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:29 vm01 ceph-mon[50888]: from='mgr.14100 192.168.123.101:0/3034760218' entity='mgr.vm01.itvfys' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm01.itvfys/trash_purge_schedule"}]: dispatch 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "362248b4-1c64-11f1-a99c-11af91d3124e", 2026-03-10T09:34:30.439 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "vm01" 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T09:34:30.439 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:30.440 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T09:34:24:264237+0000", 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T09:34:24.264761+0000", 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T09:34:30.440 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:34:30.440 INFO:teuthology.orchestra.run.vm01.stdout:mgr is available 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789] 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T09:34:30.681 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T09:34:30.681 INFO:teuthology.orchestra.run.vm01.stdout:Enabling cephadm module... 2026-03-10T09:34:30.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:30 vm01 ceph-mon[50888]: mgrmap e3: vm01.itvfys(active, since 1.00915s) 2026-03-10T09:34:30.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:30 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1271954727' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T09:34:30.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:30 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2872089462' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T09:34:32.071 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:31 vm01 ceph-mon[50888]: mgrmap e4: vm01.itvfys(active, since 2s) 2026-03-10T09:34:32.071 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:31 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3910466678' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T09:34:32.102 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:34:32.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T09:34:32.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T09:34:32.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "vm01.itvfys", 2026-03-10T09:34:32.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T09:34:32.103 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:34:32.103 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart... 2026-03-10T09:34:32.103 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 5... 
2026-03-10T09:34:33.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:32 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3910466678' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T09:34:33.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:32 vm01 ceph-mon[50888]: mgrmap e5: vm01.itvfys(active, since 3s) 2026-03-10T09:34:33.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:32 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1418387743' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T09:34:35.228 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: Active manager daemon vm01.itvfys restarted 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: Activating manager daemon vm01.itvfys 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: mgrmap e6: vm01.itvfys(active, starting, since 0.00564271s) 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr metadata", "who": "vm01.itvfys", "id": "vm01.itvfys"}]: dispatch 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 
ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: Manager daemon vm01.itvfys is now available 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm01.itvfys/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:34:35.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:34 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm01.itvfys/trash_purge_schedule"}]: dispatch 2026-03-10T09:34:36.003 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 
2026-03-10T09:34:36.003 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T09:34:36.003 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T09:34:36.003 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:34:36.003 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 5 is available 2026-03-10T09:34:36.003 INFO:teuthology.orchestra.run.vm01.stdout:Setting orchestrator backend to cephadm... 2026-03-10T09:34:36.520 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:36 vm01 ceph-mon[50888]: Found migration_current of "None". Setting to last migration. 2026-03-10T09:34:36.520 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:36 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:36.520 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:36 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:36.520 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:36 vm01 ceph-mon[50888]: mgrmap e7: vm01.itvfys(active, since 1.00898s) 2026-03-10T09:34:36.520 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:36 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:36.520 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:36 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:34:36.563 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T09:34:36.563 INFO:teuthology.orchestra.run.vm01.stdout:Generating ssh key... 
2026-03-10T09:34:37.065 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJsCzKQ2kuGgj4VaaPGqKk6FYscV9BAeEj8iK44zdrZc ceph-362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:34:37.065 INFO:teuthology.orchestra.run.vm01.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T09:34:37.065 INFO:teuthology.orchestra.run.vm01.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T09:34:37.065 INFO:teuthology.orchestra.run.vm01.stdout:Adding host vm01... 2026-03-10T09:34:37.281 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:36] ENGINE Bus STARTING 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:36] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:36] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:36] ENGINE Bus STARTED 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:36] ENGINE Client ('192.168.123.101', 50192) lost — peer dropped 
the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: Generating ssh key... 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:37.282 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:37 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:38.739 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Added host 'vm01' with addr '192.168.123.101' 2026-03-10T09:34:38.739 INFO:teuthology.orchestra.run.vm01.stdout:Deploying mon service with default placement... 
2026-03-10T09:34:38.972 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:38 vm01 ceph-mon[50888]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:38.972 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:38 vm01 ceph-mon[50888]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": "192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:38.972 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:38 vm01 ceph-mon[50888]: Deploying cephadm binary to vm01 2026-03-10T09:34:38.972 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:38 vm01 ceph-mon[50888]: mgrmap e8: vm01.itvfys(active, since 2s) 2026-03-10T09:34:38.972 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:38 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:38.972 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:38 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:34:38.996 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T09:34:38.996 INFO:teuthology.orchestra.run.vm01.stdout:Deploying mgr service with default placement... 2026-03-10T09:34:39.311 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T09:34:39.311 INFO:teuthology.orchestra.run.vm01.stdout:Deploying crash service with default placement... 2026-03-10T09:34:39.553 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled crash update... 2026-03-10T09:34:39.553 INFO:teuthology.orchestra.run.vm01.stdout:Deploying ceph-exporter service with default placement... 2026-03-10T09:34:39.824 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled ceph-exporter update... 
2026-03-10T09:34:39.824 INFO:teuthology.orchestra.run.vm01.stdout:Deploying prometheus service with default placement... 2026-03-10T09:34:40.064 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:39 vm01 ceph-mon[50888]: Added host vm01 2026-03-10T09:34:40.064 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:39 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:40.064 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:39 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:40.064 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:39 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:40.064 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:39 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:40.114 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled prometheus update... 2026-03-10T09:34:40.114 INFO:teuthology.orchestra.run.vm01.stdout:Deploying grafana service with default placement... 2026-03-10T09:34:40.408 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled grafana update... 2026-03-10T09:34:40.408 INFO:teuthology.orchestra.run.vm01.stdout:Deploying node-exporter service with default placement... 2026-03-10T09:34:40.682 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled node-exporter update... 2026-03-10T09:34:40.682 INFO:teuthology.orchestra.run.vm01.stdout:Deploying alertmanager service with default placement... 
2026-03-10T09:34:40.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:40.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: Saving service mon spec with placement count:5 2026-03-10T09:34:40.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:40.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: Saving service mgr spec with placement count:2 2026-03-10T09:34:40.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:40.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: Saving service crash spec with placement * 2026-03-10T09:34:40.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "ceph-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:40.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: Saving service ceph-exporter spec with placement * 2026-03-10T09:34:40.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:40.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:40.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 
09:34:40 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:40.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:40.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:40 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:41.558 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled alertmanager update... 2026-03-10T09:34:41.796 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:41 vm01 ceph-mon[50888]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:41.796 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:41 vm01 ceph-mon[50888]: Saving service prometheus spec with placement count:1 2026-03-10T09:34:41.796 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:41 vm01 ceph-mon[50888]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:41.796 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:41 vm01 ceph-mon[50888]: Saving service grafana spec with placement count:1 2026-03-10T09:34:41.796 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:41 vm01 ceph-mon[50888]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:41.796 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:41 vm01 ceph-mon[50888]: Saving service node-exporter spec with placement * 2026-03-10T09:34:41.796 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:41 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:41.796 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:41 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:42.158 INFO:teuthology.orchestra.run.vm01.stdout:Enabling the dashboard module... 2026-03-10T09:34:42.795 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:42 vm01 ceph-mon[50888]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:42.795 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:42 vm01 ceph-mon[50888]: Saving service alertmanager spec with placement count:1 2026-03-10T09:34:42.795 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:42 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1608313738' entity='client.admin' 2026-03-10T09:34:43.228 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:42 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3999854854' entity='client.admin' 2026-03-10T09:34:43.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:42 vm01 ceph-mon[50888]: from='mgr.14118 192.168.123.101:0/3585301713' entity='mgr.vm01.itvfys' 2026-03-10T09:34:43.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:42 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/2074754695' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T09:34:43.666 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:34:43.666 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-10T09:34:43.666 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T09:34:43.666 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "vm01.itvfys", 2026-03-10T09:34:43.666 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T09:34:43.666 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:34:43.666 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart... 2026-03-10T09:34:43.666 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 9... 2026-03-10T09:34:44.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:44 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2074754695' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T09:34:44.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:44 vm01 ceph-mon[50888]: mgrmap e9: vm01.itvfys(active, since 8s) 2026-03-10T09:34:44.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:44 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/2374256988' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: Active manager daemon vm01.itvfys restarted 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: Activating manager daemon vm01.itvfys 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: mgrmap e10: vm01.itvfys(active, starting, since 0.00718559s) 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr metadata", "who": "vm01.itvfys", "id": "vm01.itvfys"}]: dispatch 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: Manager daemon vm01.itvfys is now available 2026-03-10T09:34:46.730 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:34:46.730 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:46 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm01.itvfys/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:34:47.457 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T09:34:47.457 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-10T09:34:47.457 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T09:34:47.457 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T09:34:47.457 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 9 is available 2026-03-10T09:34:47.457 INFO:teuthology.orchestra.run.vm01.stdout:Generating a dashboard self-signed certificate... 
2026-03-10T09:34:47.657 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:47 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm01.itvfys/trash_purge_schedule"}]: dispatch 2026-03-10T09:34:47.657 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:47 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:47.657 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:47 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:47.657 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:47 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:47] ENGINE Bus STARTING 2026-03-10T09:34:47.657 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:47 vm01 ceph-mon[50888]: mgrmap e11: vm01.itvfys(active, since 1.00972s) 2026-03-10T09:34:47.768 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T09:34:47.768 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial admin user... 2026-03-10T09:34:48.155 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$0AUSCRo414M1nQ89e14JUuumbhhPSuaNGZpFirFwGFYaqZR3h.RC.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773135288, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T09:34:48.155 INFO:teuthology.orchestra.run.vm01.stdout:Fetching dashboard port number... 2026-03-10T09:34:48.370 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T09:34:48.370 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T09:34:48.370 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-10T09:34:48.371 INFO:teuthology.orchestra.run.vm01.stdout:Ceph Dashboard is now available at: 2026-03-10T09:34:48.371 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.371 INFO:teuthology.orchestra.run.vm01.stdout: URL: https://vm01.local:8443/ 2026-03-10T09:34:48.371 INFO:teuthology.orchestra.run.vm01.stdout: User: admin 2026-03-10T09:34:48.371 INFO:teuthology.orchestra.run.vm01.stdout: Password: w9fpgnxe2z 2026-03-10T09:34:48.371 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.372 INFO:teuthology.orchestra.run.vm01.stdout:Saving cluster configuration to /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config directory 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: ceph telemetry on 2026-03-10T09:34:48.674 
INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout:For more information see: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:48.674 INFO:teuthology.orchestra.run.vm01.stdout:Bootstrap complete. 2026-03-10T09:34:48.700 INFO:tasks.cephadm:Fetching config... 2026-03-10T09:34:48.700 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:34:48.700 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T09:34:48.735 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T09:34:48.735 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:34:48.735 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T09:34:48.804 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T09:34:48.804 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:34:48.804 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/keyring of=/dev/stdout 2026-03-10T09:34:48.869 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T09:34:48.869 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:34:48.869 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T09:34:48.934 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-10T09:34:48.934 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJsCzKQ2kuGgj4VaaPGqKk6FYscV9BAeEj8iK44zdrZc ceph-362248b4-1c64-11f1-a99c-11af91d3124e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:47] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:47] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:47] ENGINE Bus STARTED 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: [10/Mar/2026:09:34:47] ENGINE Client ('192.168.123.101', 41184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 
ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:48.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/405847400' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T09:34:48.980 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:48 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2588304752' entity='client.admin' 2026-03-10T09:34:49.016 INFO:teuthology.orchestra.run.vm01.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJsCzKQ2kuGgj4VaaPGqKk6FYscV9BAeEj8iK44zdrZc ceph-362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:34:49.026 DEBUG:teuthology.orchestra.run.vm08:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJsCzKQ2kuGgj4VaaPGqKk6FYscV9BAeEj8iK44zdrZc ceph-362248b4-1c64-11f1-a99c-11af91d3124e' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T09:34:49.056 INFO:teuthology.orchestra.run.vm08.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJsCzKQ2kuGgj4VaaPGqKk6FYscV9BAeEj8iK44zdrZc ceph-362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:34:49.065 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 
362248b4-1c64-11f1-a99c-11af91d3124e -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T09:34:49.235 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:34:49.537 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T09:34:49.537 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T09:34:49.746 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:34:50.023 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm08 2026-03-10T09:34:50.024 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:34:50.024 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.conf 2026-03-10T09:34:50.040 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:34:50.040 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:34:50.095 INFO:tasks.cephadm:Adding host vm08 to orchestrator... 
2026-03-10T09:34:50.095 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph orch host add vm08 2026-03-10T09:34:50.261 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:34:50.384 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:50 vm01 ceph-mon[50888]: mgrmap e12: vm01.itvfys(active, since 2s) 2026-03-10T09:34:50.385 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:50 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2648215252' entity='client.admin' 2026-03-10T09:34:50.385 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:50 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 
192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm01", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm01", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished 2026-03-10T09:34:51.611 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:51 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:34:52.328 INFO:teuthology.orchestra.run.vm01.stdout:Added host 'vm08' with addr '192.168.123.108' 2026-03-10T09:34:52.493 
DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph orch host ls --format=json 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: Updating vm01:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: Updating vm01:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.client.admin.keyring 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: Deploying daemon ceph-exporter.vm01 on vm01 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: Deploying cephadm binary to vm08 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' 
entity='mgr.vm01.itvfys' 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm01", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm01", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: Deploying daemon crash.vm01 on vm01 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:52.532 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:52 vm01 ceph-mon[50888]: Added host vm08 2026-03-10T09:34:52.801 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:34:53.049 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:34:53.049 INFO:teuthology.orchestra.run.vm01.stdout:[{"addr": "192.168.123.101", "hostname": "vm01", "labels": [], "status": ""}, {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}] 2026-03-10T09:34:53.110 INFO:tasks.cephadm:Setting crush 
tunables to default 2026-03-10T09:34:53.110 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd crush tunables default 2026-03-10T09:34:53.394 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:34:54.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:53 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:54.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:53 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:54.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:53 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:54.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:53 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:54.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:53 vm01 ceph-mon[50888]: Deploying daemon node-exporter.vm01 on vm01 2026-03-10T09:34:54.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:53 vm01 ceph-mon[50888]: from='client.14189 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:34:54.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:53 vm01 ceph-mon[50888]: mgrmap e13: vm01.itvfys(active, since 6s) 2026-03-10T09:34:54.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:53 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/3584265777' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T09:34:54.331 INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default 2026-03-10T09:34:54.396 INFO:tasks.cephadm:Adding mon.vm01 on vm01 2026-03-10T09:34:54.396 INFO:tasks.cephadm:Adding mon.vm08 on vm08 2026-03-10T09:34:54.396 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph orch apply mon '2;vm01:192.168.123.101=vm01;vm08:192.168.123.108=vm08' 2026-03-10T09:34:54.547 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:54.578 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:54.802 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mon update... 2026-03-10T09:34:54.868 DEBUG:teuthology.orchestra.run.vm08:mon.vm08> sudo journalctl -f -n 0 -u ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm08.service 2026-03-10T09:34:54.870 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T09:34:54.870 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:34:55.056 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:55.088 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:55.324 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:34:55.325 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:34:55.325 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:34:55.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:55 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/3584265777' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T09:34:55.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:55 vm01 ceph-mon[50888]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T09:34:55.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:55 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:55.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:55 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/1584328173' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:34:56.399 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T09:34:56.400 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:34:56.547 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:56.579 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:56.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:56 vm01 ceph-mon[50888]: from='client.14193 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm01:192.168.123.101=vm01;vm08:192.168.123.108=vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:34:56.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:56 vm01 ceph-mon[50888]: Saving service mon spec with placement vm01:192.168.123.101=vm01;vm08:192.168.123.108=vm08;count:2 2026-03-10T09:34:56.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:56 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:56.729 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:56 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:56.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:56 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:56.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:56 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:56.808 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:34:56.808 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:34:56.808 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:34:57.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:57 vm01 ceph-mon[50888]: Deploying daemon alertmanager.vm01 on vm01 2026-03-10T09:34:57.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:57 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:57.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:57 vm01 ceph-mon[50888]: from='client.? 
192.168.123.108:0/1517068211' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:34:57.866 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T09:34:57.866 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:34:58.017 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:58.050 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:58.299 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:34:58.299 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:34:58.299 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:34:58.339 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:58 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/428232341' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:34:59.344 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T09:34:59.344 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:34:59.492 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:59.526 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:34:59.755 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:34:59.755 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:34:59.756 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' 
entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:34:59.905 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:34:59 vm01 ceph-mon[50888]: Deploying daemon grafana.vm01 on vm01 2026-03-10T09:35:00.811 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T09:35:00.811 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:00.965 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:00.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:00 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/1187314593' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:00.997 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:01.238 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:01.238 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:01.238 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:01.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:01 vm01 ceph-mon[50888]: from='client.? 
192.168.123.108:0/2370007780' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:01.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:01 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:02.297 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T09:35:02.297 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:02.451 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:02.484 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:02.782 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:02.782 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:02.782 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:03.228 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:02 vm01 ceph-mon[50888]: from='client.? 
192.168.123.108:0/1891319008' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:03.827 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T09:35:03.827 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:03.978 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:04.016 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:04.274 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:04.274 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:04.274 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:04.443 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:04 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/3183763413' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:05.343 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T09:35:05.343 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:05.499 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:05.536 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:05.780 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:05.780 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:05.780 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' 
entity='mgr.vm01.itvfys' 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:05.829 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:05 vm01 ceph-mon[50888]: Deploying daemon prometheus.vm01 on vm01 2026-03-10T09:35:06.838 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T09:35:06.838 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:06.993 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:07.026 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:07.045 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:06 vm01 ceph-mon[50888]: from='client.? 
192.168.123.108:0/3327461872' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:07.045 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:06 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:07.270 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:07.270 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:07.270 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:08.318 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T09:35:08.318 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:08.473 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:08.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:08 vm01 ceph-mon[50888]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:08.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:08 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/531143543' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:08.507 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:08.745 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:08.745 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:08.745 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:09.304 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:09 vm01 ceph-mon[50888]: from='client.? 
192.168.123.108:0/3613483657' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:09.795 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T09:35:09.795 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:09.948 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:09.982 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:10.216 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:10.216 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:10.216 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:10.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:10 vm01 ceph-mon[50888]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:10.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:10 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' 
entity='mgr.vm01.itvfys' 2026-03-10T09:35:10.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:10 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:10.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:10 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' 2026-03-10T09:35:10.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:10 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T09:35:11.273 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T09:35:11.273 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:11.406 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:11 vm01 ceph-mon[50888]: from='client.? 
192.168.123.108:0/1113061556' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:11.406 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:11 vm01 ceph-mon[50888]: from='mgr.14162 192.168.123.101:0/2297001413' entity='mgr.vm01.itvfys' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T09:35:11.406 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:11 vm01 ceph-mon[50888]: mgrmap e14: vm01.itvfys(active, since 24s) 2026-03-10T09:35:11.427 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:11.460 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:11.709 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:11.709 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:11.709 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:12.309 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:12 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/3228240535' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:12.773 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T09:35:12.773 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:12.931 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:12.964 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:13.206 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:13.207 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:13.207 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:13.511 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:13 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/2317745781' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T09:35:14.271 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T09:35:14.271 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json 2026-03-10T09:35:14.425 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:14.460 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: Active manager daemon vm01.itvfys restarted 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: Activating manager daemon vm01.itvfys 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: osdmap e5: 0 total, 0 up, 0 in 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: mgrmap e15: vm01.itvfys(active, starting, since 0.0044727s) 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr metadata", "who": "vm01.itvfys", "id": "vm01.itvfys"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 
cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: Manager daemon vm01.itvfys is now available 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm01.itvfys/mirror_snapshot_schedule"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/vm01.itvfys/trash_purge_schedule"}]: dispatch 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:14.524 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 
2026-03-10T09:35:14.700 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:14.700 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T09:35:14.700 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T09:35:15.741 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T09:35:15.741 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json
2026-03-10T09:35:15.911 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T09:35:15.953 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /etc/ceph/ceph.conf
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: mgrmap e16: vm01.itvfys(active, since 1.00722s)
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: [10/Mar/2026:09:35:14] ENGINE Bus STARTING
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: [10/Mar/2026:09:35:14] ENGINE Serving on http://192.168.123.101:8765
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/689033023' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:15.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:15 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch
2026-03-10T09:35:16.213 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:35:16.213 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T09:35:16.213 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: [10/Mar/2026:09:35:14] ENGINE Serving on https://192.168.123.101:7150
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: [10/Mar/2026:09:35:14] ENGINE Bus STARTED
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: [10/Mar/2026:09:35:14] ENGINE Client ('192.168.123.101', 36618) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/2772847119' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: mgrmap e17: vm01.itvfys(active, since 2s)
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:16.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:16 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:35:17.279 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T09:35:17.280 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json
2026-03-10T09:35:17.570 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:17.935 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:35:17.935 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T09:35:17.935 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1
2026-03-10T09:35:17.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T09:35:17.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T09:35:17.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: Updating vm08:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:17.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: Updating vm01:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:17.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: Updating vm08:/etc/ceph/ceph.client.admin.keyring
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: Updating vm01:/etc/ceph/ceph.client.admin.keyring
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm08", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm08", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-10T09:35:17.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:17 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:18.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: Updating vm08:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.client.admin.keyring
2026-03-10T09:35:18.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: Updating vm01:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.client.admin.keyring
2026-03-10T09:35:18.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: Deploying daemon ceph-exporter.vm08 on vm08
2026-03-10T09:35:18.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/2338006956' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:18.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:18.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:18.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:18.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:18.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm08", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T09:35:18.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm08", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-10T09:35:18.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:18 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:19.014 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T09:35:19.014 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json
2026-03-10T09:35:19.234 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:19.486 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:35:19.486 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T09:35:19.486 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1
2026-03-10T09:35:19.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:19 vm01 ceph-mon[50888]: Deploying daemon crash.vm08 on vm08
2026-03-10T09:35:19.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:19 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:19.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:19 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:19.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:19 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:19.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:19 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:19.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:19 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:19.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:19 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/3948999572' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:20.556 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T09:35:20.556 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json
2026-03-10T09:35:20.712 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:20.976 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:35:20.976 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T09:35:20.976 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1
2026-03-10T09:35:20.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:20 vm01 ceph-mon[50888]: Deploying daemon node-exporter.vm08 on vm08
2026-03-10T09:35:21.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/1347640754' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:21.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:21.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:21.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:21.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:21.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm08.pllkti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T09:35:21.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm08.pllkti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T09:35:21.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T09:35:21.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:22.034 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T09:35:22.034 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json
2026-03-10T09:35:22.353 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:22.796 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T09:35:22.796 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:34:23.279412Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
2026-03-10T09:35:22.796 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1
2026-03-10T09:35:22.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:22 vm01 ceph-mon[50888]: Deploying daemon mgr.vm08.pllkti on vm08
2026-03-10T09:35:22.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:22.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:22.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:22.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:22.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T09:35:22.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:23.353 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: Updating vm08:/etc/ceph/ceph.client.admin.keyring
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: Updating vm01:/etc/ceph/ceph.client.admin.keyring
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm08", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm08", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]': finished
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: Updating vm08:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.client.admin.keyring
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: Updating vm01:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.client.admin.keyring
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: Deploying daemon ceph-exporter.vm08 on vm08
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/2338006956' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm08", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.vm08", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: Deploying daemon crash.vm08 on vm08
2026-03-10T09:35:23.354 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/3948999572' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: Deploying daemon node-exporter.vm08 on vm08
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/1347640754' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm08.pllkti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.vm08.pllkti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: Deploying daemon mgr.vm08.pllkti on vm08
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:23.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:23 vm08 ceph-mon[58470]: mon.vm08@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
2026-03-10T09:35:24.026 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-10T09:35:24.026 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mon dump -f json
2026-03-10T09:35:24.201 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm08/config
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: mon.vm01 calling monitor election
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.? 192.168.123.108:0/2970468074' entity='mgr.vm08.pllkti' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm08.pllkti/crt"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: mon.vm08 calling monitor election
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: mon.vm01 is new leader, mons vm01,vm08 in quorum (ranks 0,1)
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: monmap epoch 2
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: fsid 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: last_changed 2026-03-10T09:35:23.367083+0000
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: created 2026-03-10T09:34:23.279412+0000
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: min_mon_release 19 (squid)
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: election_strategy: 1
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.vm01
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.vm08
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: fsmap
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: osdmap e5: 0 total, 0 up, 0 in
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: mgrmap e17: vm01.itvfys(active, since 14s)
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: overall HEALTH_OK
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: Standby manager daemon vm08.pllkti started
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.? 192.168.123.108:0/2970468074' entity='mgr.vm08.pllkti' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.? 192.168.123.108:0/2970468074' entity='mgr.vm08.pllkti' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm08.pllkti/key"}]: dispatch
2026-03-10T09:35:28.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:28 vm01 ceph-mon[50888]: from='mgr.? 192.168.123.108:0/2970468074' entity='mgr.vm08.pllkti' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch
2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: mon.vm01 calling monitor election
2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.?
192.168.123.108:0/2970468074' entity='mgr.vm08.pllkti' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm08.pllkti/crt"}]: dispatch 2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch 2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: mon.vm08 calling monitor election 2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch 2026-03-10T09:35:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: mon.vm01 is new leader, mons vm01,vm08 in quorum (ranks 0,1) 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: monmap epoch 2 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: fsid 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: last_changed 2026-03-10T09:35:23.367083+0000 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: created 2026-03-10T09:34:23.279412+0000 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 
10 09:35:28 vm08 ceph-mon[58470]: min_mon_release 19 (squid) 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: election_strategy: 1 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.vm01 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.vm08 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: fsmap 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: osdmap e5: 0 total, 0 up, 0 in 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: mgrmap e17: vm01.itvfys(active, since 14s) 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: overall HEALTH_OK 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: Standby manager daemon vm08.pllkti started 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.? 192.168.123.108:0/2970468074' entity='mgr.vm08.pllkti' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.? 192.168.123.108:0/2970468074' entity='mgr.vm08.pllkti' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/vm08.pllkti/key"}]: dispatch 2026-03-10T09:35:28.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:28 vm08 ceph-mon[58470]: from='mgr.? 
192.168.123.108:0/2970468074' entity='mgr.vm08.pllkti' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T09:35:28.860 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T09:35:28.860 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":2,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","modified":"2026-03-10T09:35:23.367083Z","created":"2026-03-10T09:34:23.279412Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"vm01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"vm08","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T09:35:28.860 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 2 2026-03-10T09:35:28.917 INFO:tasks.cephadm:Generating final ceph.conf file... 
2026-03-10T09:35:28.917 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph config generate-minimal-conf
2026-03-10T09:35:29.107 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config
2026-03-10T09:35:29.353 INFO:teuthology.orchestra.run.vm01.stdout:# minimal ceph.conf for 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:35:29.353 INFO:teuthology.orchestra.run.vm01.stdout:[global]
2026-03-10T09:35:29.353 INFO:teuthology.orchestra.run.vm01.stdout: fsid = 362248b4-1c64-11f1-a99c-11af91d3124e
2026-03-10T09:35:29.353 INFO:teuthology.orchestra.run.vm01.stdout: mon_host = [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0]
2026-03-10T09:35:29.421 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T09:35:29.421 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:35:29.421 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T09:35:29.448 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:35:29.449 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T09:35:29.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: mgrmap e18: vm01.itvfys(active, since 14s), standbys: vm08.pllkti
2026-03-10T09:35:29.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr metadata", "who": "vm08.pllkti", "id": "vm08.pllkti"}]: dispatch
2026-03-10T09:35:29.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T09:35:29.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/1783195646' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:29.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.503 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:35:29.504 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T09:35:29.530 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:35:29.530 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T09:35:29.594 INFO:tasks.cephadm:Deploying OSDs...
2026-03-10T09:35:29.595 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T09:35:29.595 DEBUG:teuthology.orchestra.run.vm01:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T09:35:29.615 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:35:29.615 DEBUG:teuthology.orchestra.run.vm01:> ls /dev/[sv]d?
2026-03-10T09:35:29.672 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vda
2026-03-10T09:35:29.672 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdb
2026-03-10T09:35:29.672 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdc
2026-03-10T09:35:29.672 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdd
2026-03-10T09:35:29.672 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vde
2026-03-10T09:35:29.673 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T09:35:29.673 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T09:35:29.673 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdb
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/4218425179' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:29.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T09:35:29.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T09:35:29.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:29 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdb
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 221 Links: 1 Device type: fc,10
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 09:34:50.394997145 +0000
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 09:33:34.236538158 +0000
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 09:33:34.236538158 +0000
2026-03-10T09:35:29.737 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 09:29:47.257000000 +0000
2026-03-10T09:35:29.737 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T09:35:29.816 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T09:35:29.816 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T09:35:29.816 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000158708 s, 3.2 MB/s
2026-03-10T09:35:29.818 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: mgrmap e18: vm01.itvfys(active, since 14s), standbys: vm08.pllkti
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr metadata", "who": "vm08.pllkti", "id": "vm08.pllkti"}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/1783195646' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/4218425179' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T09:35:29.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:29 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:29.896 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdc
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdc
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 222 Links: 1 Device type: fc,20
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 09:34:50.427997185 +0000
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 09:33:34.231538153 +0000
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 09:33:34.231538153 +0000
2026-03-10T09:35:29.954 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 09:29:47.259000000 +0000
2026-03-10T09:35:29.954 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T09:35:30.029 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T09:35:30.029 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T09:35:30.029 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000120244 s, 4.3 MB/s
2026-03-10T09:35:30.029 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T09:35:30.100 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdd
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdd
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 225 Links: 1 Device type: fc,30
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 09:34:50.457997222 +0000
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 09:33:34.235538157 +0000
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 09:33:34.235538157 +0000
2026-03-10T09:35:30.159 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 09:29:47.270000000 +0000
2026-03-10T09:35:30.159 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T09:35:30.222 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T09:35:30.222 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T09:35:30.222 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000206197 s, 2.5 MB/s
2026-03-10T09:35:30.223 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T09:35:30.295 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vde
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vde
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 09:34:50.483997253 +0000
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 09:33:34.230538152 +0000
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 09:33:34.230538152 +0000
2026-03-10T09:35:30.367 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 09:29:47.343000000 +0000
2026-03-10T09:35:30.367 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T09:35:30.439 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T09:35:30.439 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T09:35:30.439 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000120215 s, 4.3 MB/s
2026-03-10T09:35:30.440 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T09:35:30.512 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T09:35:30.512 DEBUG:teuthology.orchestra.run.vm08:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T09:35:30.527 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:35:30.528 DEBUG:teuthology.orchestra.run.vm08:> ls /dev/[sv]d?
2026-03-10T09:35:30.583 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vda
2026-03-10T09:35:30.584 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdb
2026-03-10T09:35:30.584 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdc
2026-03-10T09:35:30.584 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdd
2026-03-10T09:35:30.584 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vde
2026-03-10T09:35:30.584 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T09:35:30.584 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T09:35:30.584 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdb
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdb
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 09:35:16.216001719 +0000
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 09:33:34.874455061 +0000
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 09:33:34.874455061 +0000
2026-03-10T09:35:30.640 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 09:30:17.244000000 +0000
2026-03-10T09:35:30.641 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T09:35:30.702 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T09:35:30.703 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T09:35:30.703 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000182021 s, 2.8 MB/s
2026-03-10T09:35:30.704 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T09:35:30.760 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdc
2026-03-10T09:35:30.816 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdc
2026-03-10T09:35:30.816 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:35:30.817 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T09:35:30.817 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:35:30.817 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:35:30.817 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 09:35:16.246001745 +0000
2026-03-10T09:35:30.817 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 09:33:34.875455062 +0000
2026-03-10T09:35:30.817 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 09:33:34.875455062 +0000
2026-03-10T09:35:30.817 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 09:30:17.249000000 +0000
2026-03-10T09:35:30.817 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: Updating vm08:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: Updating vm01:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: Reconfiguring mon.vm01 (unknown last config time)...
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: Reconfiguring daemon mon.vm01 on vm01
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: Reconfiguring mgr.vm01.itvfys (unknown last config time)...
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm01.itvfys", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: Reconfiguring daemon mgr.vm01.itvfys on vm01
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm01", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm01", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:30.823 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:30 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:30.878 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T09:35:30.878 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T09:35:30.878 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000164258 s, 3.1 MB/s
2026-03-10T09:35:30.879 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T09:35:30.936 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdd
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdd
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 09:35:16.283001777 +0000
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 09:33:34.902455083 +0000
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 09:33:34.902455083 +0000
2026-03-10T09:35:30.994 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 09:30:17.254000000 +0000
2026-03-10T09:35:30.994 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: Updating vm08:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: Updating vm01:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/config/ceph.conf
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: Reconfiguring mon.vm01 (unknown last config time)...
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: Reconfiguring daemon mon.vm01 on vm01
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: Reconfiguring mgr.vm01.itvfys (unknown last config time)...
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm01.itvfys", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: Reconfiguring daemon mgr.vm01.itvfys on vm01
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm01", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm01", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.055 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:30 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.057 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T09:35:31.057 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T09:35:31.057 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000137618 s, 3.7 MB/s
2026-03-10T09:35:31.058 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T09:35:31.114 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vde
2026-03-10T09:35:31.170 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vde
2026-03-10T09:35:31.171 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T09:35:31.171 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T09:35:31.171 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T09:35:31.171 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T09:35:31.171 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 09:35:16.305001796 +0000
2026-03-10T09:35:31.171 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 09:33:34.915455093 +0000
2026-03-10T09:35:31.171 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 09:33:34.915455093 +0000
2026-03-10T09:35:31.171 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 09:30:17.258000000 +0000
2026-03-10T09:35:31.171 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T09:35:31.233 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T09:35:31.234 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T09:35:31.234 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000169797 s, 3.0 MB/s
2026-03-10T09:35:31.234 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T09:35:31.293 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph orch apply osd --all-available-devices
2026-03-10T09:35:31.502 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm08/config
2026-03-10T09:35:31.734 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled osd.all-available-devices update...
2026-03-10T09:35:31.797 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
2026-03-10T09:35:31.797 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json
2026-03-10T09:35:31.920 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:31 vm01 ceph-mon[50888]: Reconfiguring ceph-exporter.vm01 (monmap changed)...
2026-03-10T09:35:31.920 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:31 vm01 ceph-mon[50888]: Reconfiguring daemon ceph-exporter.vm01 on vm01
2026-03-10T09:35:31.920 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:31 vm01 ceph-mon[50888]: Reconfiguring crash.vm01 (monmap changed)...
2026-03-10T09:35:31.920 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:31 vm01 ceph-mon[50888]: Reconfiguring daemon crash.vm01 on vm01
2026-03-10T09:35:31.920 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:31 vm01 ceph-mon[50888]: Reconfiguring alertmanager.vm01 (dependencies changed)...
2026-03-10T09:35:31.920 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:31 vm01 ceph-mon[50888]: Reconfiguring daemon alertmanager.vm01 on vm01
2026-03-10T09:35:31.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:31 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:31.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:31 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.084 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config
2026-03-10T09:35:32.085 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:31 vm08 ceph-mon[58470]: Reconfiguring ceph-exporter.vm01 (monmap changed)...
2026-03-10T09:35:32.085 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:31 vm08 ceph-mon[58470]: Reconfiguring daemon ceph-exporter.vm01 on vm01
2026-03-10T09:35:32.085 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:31 vm08 ceph-mon[58470]: Reconfiguring crash.vm01 (monmap changed)...
2026-03-10T09:35:32.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:31 vm08 ceph-mon[58470]: Reconfiguring daemon crash.vm01 on vm01
2026-03-10T09:35:32.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:31 vm08 ceph-mon[58470]: Reconfiguring alertmanager.vm01 (dependencies changed)...
2026-03-10T09:35:32.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:31 vm08 ceph-mon[58470]: Reconfiguring daemon alertmanager.vm01 on vm01
2026-03-10T09:35:32.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:31 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:31 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.426 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:35:32.486 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T09:35:32.681 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:32 vm01 ceph-mon[50888]: Reconfiguring grafana.vm01 (dependencies changed)...
2026-03-10T09:35:32.682 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:32 vm01 ceph-mon[50888]: Reconfiguring daemon grafana.vm01 on vm01
2026-03-10T09:35:32.682 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.682 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.682 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.682 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:32 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/907693658' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:35:32.910 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:32 vm08 ceph-mon[58470]: Reconfiguring grafana.vm01 (dependencies changed)...
2026-03-10T09:35:32.910 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:32 vm08 ceph-mon[58470]: Reconfiguring daemon grafana.vm01 on vm01
2026-03-10T09:35:32.910 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.910 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.910 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:32.910 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:32 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/907693658' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:35:33.487 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json
2026-03-10T09:35:33.659 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: Reconfiguring prometheus.vm01 (dependencies changed)...
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: Marking host: vm01 for OSDSpec preview refresh.
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: Marking host: vm08 for OSDSpec preview refresh.
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: Saving service osd.all-available-devices spec with placement *
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: Reconfiguring daemon prometheus.vm01 on vm01
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm08", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm08", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm08.pllkti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T09:35:33.748 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:33.749 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.749 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.749 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T09:35:33.749 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T09:35:33.749 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:33.778 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch apply osd", "all_available_devices": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: Reconfiguring prometheus.vm01 (dependencies changed)...
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: Marking host: vm01 for OSDSpec preview refresh.
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: Marking host: vm08 for OSDSpec preview refresh.
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: Saving service osd.all-available-devices spec with placement *
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: Reconfiguring daemon prometheus.vm01 on vm01
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.ceph-exporter.vm08", "caps": ["mon", "profile ceph-exporter", "mon", "allow r", "mgr", "allow r", "osd", "allow r"]}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.vm08", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.vm08.pllkti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T09:35:33.779 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:35:33.905 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T09:35:33.969 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0}
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: Reconfiguring ceph-exporter.vm08 (monmap changed)...
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: Reconfiguring daemon ceph-exporter.vm08 on vm08
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: Reconfiguring crash.vm08 (monmap changed)...
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: Reconfiguring daemon crash.vm08 on vm08
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: Reconfiguring mgr.vm08.pllkti (monmap changed)...
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: Reconfiguring daemon mgr.vm08.pllkti on vm08
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: Reconfiguring mon.vm08 (monmap changed)...
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: Reconfiguring daemon mon.vm08 on vm08
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch
2026-03-10T09:35:34.664 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm01.local:3000"}]: dispatch
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm01.local:9095"}]: dispatch
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/940160101' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.665 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:35:34.970 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: Reconfiguring ceph-exporter.vm08 (monmap changed)...
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: Reconfiguring daemon ceph-exporter.vm08 on vm08
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: Reconfiguring crash.vm08 (monmap changed)...
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: Reconfiguring daemon crash.vm08 on vm08
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: Reconfiguring mgr.vm08.pllkti (monmap changed)...
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: Reconfiguring daemon mgr.vm08.pllkti on vm08
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: Reconfiguring mon.vm08 (monmap changed)...
2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: Reconfiguring daemon mon.vm08 on vm08 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm01.local:3000"}]: dispatch 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 
192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm01.local:9095"}]: dispatch 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/940160101' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.068 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.201 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:35.449 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:35.510 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":5,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0} 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm01.local:3000"}]: dispatch 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm01.local:9095"}]: dispatch 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:35.667 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:35:35.668 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:35:35.668 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:35.668 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:35:35.668 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:35.668 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:35 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/4260445352' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm01.local:9093"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm01.local:3000"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm01.local:9095"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 
ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:36.003 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:35 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/4260445352' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:36.511 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:36.669 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:36.673 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: 
from='client.? 192.168.123.108:0/2176943534' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b112f42b-2b1e-413f-b116-f865f94b0c29"}]: dispatch 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b112f42b-2b1e-413f-b116-f865f94b0c29"}]: dispatch 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b112f42b-2b1e-413f-b116-f865f94b0c29"}]': finished 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2573819537' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "de2dce78-ca33-4b1c-9e69-a1a6c779ba19"}]: dispatch 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/2573819537' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "de2dce78-ca33-4b1c-9e69-a1a6c779ba19"}]': finished 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: osdmap e7: 2 total, 0 up, 2 in 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/1740690140' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:36.714 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:36 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3474658398' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:36.898 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:36.951 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1773135336,"num_remapped_pgs":0} 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/2176943534' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b112f42b-2b1e-413f-b116-f865f94b0c29"}]: dispatch 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b112f42b-2b1e-413f-b116-f865f94b0c29"}]: dispatch 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b112f42b-2b1e-413f-b116-f865f94b0c29"}]': finished 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/2573819537' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "de2dce78-ca33-4b1c-9e69-a1a6c779ba19"}]: dispatch 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/2573819537' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "de2dce78-ca33-4b1c-9e69-a1a6c779ba19"}]': finished 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: osdmap e7: 2 total, 0 up, 2 in 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='client.? 
192.168.123.108:0/1740690140' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:37.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:36 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3474658398' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:37.952 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:37.973 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:37 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3249661005' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:38.052 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:37 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3249661005' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:38.120 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:38.330 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:38.390 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1773135336,"num_remapped_pgs":0} 2026-03-10T09:35:39.391 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:39.396 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:39 vm08 ceph-mon[58470]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:39.396 
INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:39 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3706597186' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:39.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:39 vm01 ceph-mon[50888]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:39.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:39 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3706597186' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:39.603 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:39.844 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:39.897 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773135339,"num_remapped_pgs":0} 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/3974804073' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "52923810-6aca-4cac-8f49-9dce6a88ce87"}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "52923810-6aca-4cac-8f49-9dce6a88ce87"}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "52923810-6aca-4cac-8f49-9dce6a88ce87"}]': finished 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: osdmap e8: 3 total, 0 up, 3 in 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1828069402' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "42839f5e-af27-4202-b7cc-68e318d52cee"}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "42839f5e-af27-4202-b7cc-68e318d52cee"}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "42839f5e-af27-4202-b7cc-68e318d52cee"}]': finished 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: osdmap e9: 4 total, 0 up, 4 in 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/2304862891' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/863750360' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:40.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:40 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/2269939515' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/3974804073' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "52923810-6aca-4cac-8f49-9dce6a88ce87"}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "52923810-6aca-4cac-8f49-9dce6a88ce87"}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "52923810-6aca-4cac-8f49-9dce6a88ce87"}]': finished 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: osdmap e8: 3 total, 0 up, 3 in 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/1828069402' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "42839f5e-af27-4202-b7cc-68e318d52cee"}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "42839f5e-af27-4202-b7cc-68e318d52cee"}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "42839f5e-af27-4202-b7cc-68e318d52cee"}]': finished 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: osdmap e9: 4 total, 0 up, 4 in 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? 
192.168.123.108:0/2304862891' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/863750360' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:40.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:40 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/2269939515' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:40.898 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:41.073 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:41.302 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:41.363 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":9,"num_osds":4,"num_up_osds":0,"osd_up_since":0,"num_in_osds":4,"osd_in_since":1773135339,"num_remapped_pgs":0} 2026-03-10T09:35:41.462 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:41 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/74615099' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:41.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:41 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/74615099' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:42.364 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:42.525 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:42.542 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:42 vm08 ceph-mon[58470]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:42.640 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:42 vm01 ceph-mon[50888]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:42.747 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:42.790 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":10,"num_osds":5,"num_up_osds":0,"osd_up_since":0,"num_in_osds":5,"osd_in_since":1773135342,"num_remapped_pgs":0} 2026-03-10T09:35:43.469 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/4249078823' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "23dcd582-f844-4138-a1e1-2aa2b2311bc8"}]: dispatch 2026-03-10T09:35:43.469 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "23dcd582-f844-4138-a1e1-2aa2b2311bc8"}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "23dcd582-f844-4138-a1e1-2aa2b2311bc8"}]': finished 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: osdmap e10: 5 total, 0 up, 5 in 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3918949964' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/2930303965' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/4100713625' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2cbd4429-08c1-4c36-bb18-3b48502e7ba6"}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/4100713625' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2cbd4429-08c1-4c36-bb18-3b48502e7ba6"}]': finished 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: osdmap e11: 6 total, 0 up, 6 in 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:43.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:43 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:43.791 DEBUG:teuthology.orchestra.run.vm01:> sudo 
/home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/4249078823' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "23dcd582-f844-4138-a1e1-2aa2b2311bc8"}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "23dcd582-f844-4138-a1e1-2aa2b2311bc8"}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "23dcd582-f844-4138-a1e1-2aa2b2311bc8"}]': finished 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: osdmap e10: 5 total, 0 up, 5 in 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd 
metadata", "id": 3}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3918949964' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/2930303965' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/4100713625' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2cbd4429-08c1-4c36-bb18-3b48502e7ba6"}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/4100713625' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2cbd4429-08c1-4c36-bb18-3b48502e7ba6"}]': finished 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: osdmap e11: 6 total, 0 up, 6 in 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:43.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:43 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:43.954 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:44.190 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:44.234 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1773135343,"num_remapped_pgs":0} 2026-03-10T09:35:44.473 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:44 vm01 ceph-mon[50888]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:44.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:44 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T09:35:44.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:44 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/4229517844' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:44.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:44 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1226662534' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:44.835 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:44 vm08 ceph-mon[58470]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:44.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:44 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T09:35:44.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:44 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/4229517844' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:44.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:44 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/1226662534' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:45.235 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:45.392 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:45.607 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:45.648 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":11,"num_osds":6,"num_up_osds":0,"osd_up_since":0,"num_in_osds":6,"osd_in_since":1773135343,"num_remapped_pgs":0} 2026-03-10T09:35:46.648 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:46.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3931821899' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/2852586707' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d5b5f7a-3dec-4b69-bb94-bd45de84c36c"}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d5b5f7a-3dec-4b69-bb94-bd45de84c36c"}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d5b5f7a-3dec-4b69-bb94-bd45de84c36c"}]': finished 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: osdmap e12: 7 total, 0 up, 7 in 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:46.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:46 vm01 ceph-mon[50888]: from='mgr.14217 
192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:46.774 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3931821899' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/2852586707' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d5b5f7a-3dec-4b69-bb94-bd45de84c36c"}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d5b5f7a-3dec-4b69-bb94-bd45de84c36c"}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d5b5f7a-3dec-4b69-bb94-bd45de84c36c"}]': finished 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: osdmap e12: 7 total, 0 up, 7 in 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:46.775 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:46 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:46.836 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 
2026-03-10T09:35:47.072 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:47.131 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:47.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='client.? 192.168.123.108:0/2940533964' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:47.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1493724574' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "117b0fe8-68ff-45e8-b71b-20c4438c4bbe"}]: dispatch 2026-03-10T09:35:47.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1493724574' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "117b0fe8-68ff-45e8-b71b-20c4438c4bbe"}]': finished 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: osdmap e13: 8 total, 0 up, 8 in 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 
cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3830232260' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:47.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:47 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3890224311' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/2940533964' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/1493724574' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "117b0fe8-68ff-45e8-b71b-20c4438c4bbe"}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/1493724574' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "117b0fe8-68ff-45e8-b71b-20c4438c4bbe"}]': finished 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: osdmap e13: 8 total, 0 up, 8 in 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 
ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3830232260' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:47.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:47 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3890224311' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T09:35:48.132 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:48.289 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:48.510 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:48.585 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:48.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:48 vm08 ceph-mon[58470]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:48.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:48 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/2711705251' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:48.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:48 vm01 ceph-mon[50888]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:48.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:48 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2711705251' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:49.585 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:49.749 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:49.966 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:50.031 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:50.587 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:50 vm08 ceph-mon[58470]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:50.587 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:50 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/730365907' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:50.587 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:50 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T09:35:50.587 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:50 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:50.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:50 vm01 ceph-mon[50888]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:50.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:50 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/730365907' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:50.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:50 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T09:35:50.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:50 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:51.032 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:51.260 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:51.518 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:51.612 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:51.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:51 vm08 ceph-mon[58470]: Deploying daemon osd.0 on vm08 2026-03-10T09:35:51.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:51 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T09:35:51.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:51 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:51.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:51 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3632473809' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:51.855 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:51 vm01 ceph-mon[50888]: Deploying daemon osd.0 on vm08 2026-03-10T09:35:51.855 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:51 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T09:35:51.855 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:51 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:51.855 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:51 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/3632473809' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:52.612 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:52.825 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:52.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:52 vm01 ceph-mon[50888]: Deploying daemon osd.1 on vm01 2026-03-10T09:35:52.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:52 vm01 ceph-mon[50888]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:52.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:52 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:52.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:52 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:52.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:52 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T09:35:52.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:52 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:52.872 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:52 vm08 ceph-mon[58470]: Deploying daemon osd.1 on vm01 2026-03-10T09:35:52.872 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:52 vm08 ceph-mon[58470]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:52.872 
INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:52 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:52.872 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:52 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:52.872 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:52 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T09:35:52.872 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:52 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:53.123 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:53.189 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:53.652 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:53 vm01 ceph-mon[50888]: Deploying daemon osd.2 on vm08 2026-03-10T09:35:53.652 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:53 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:53.652 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:53 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:53.652 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:53 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T09:35:53.652 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:53 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-10T09:35:53.652 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:53 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2391783646' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:53.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:53 vm08 ceph-mon[58470]: Deploying daemon osd.2 on vm08 2026-03-10T09:35:53.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:53 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:53.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:53 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:53.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:53 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T09:35:53.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:53 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:53.837 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:53 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/2391783646' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:54.190 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:54.430 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:54.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:54 vm01 ceph-mon[50888]: Deploying daemon osd.3 on vm01 2026-03-10T09:35:54.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:54 vm01 ceph-mon[50888]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:54.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:54 vm01 ceph-mon[50888]: from='osd.0 [v2:192.168.123.108:6800/1166711813,v1:192.168.123.108:6801/1166711813]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T09:35:54.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:54 vm01 ceph-mon[50888]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T09:35:54.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:54 vm01 ceph-mon[50888]: from='osd.1 [v2:192.168.123.101:6802/3603837159,v1:192.168.123.101:6803/3603837159]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:35:54.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:54 vm01 ceph-mon[50888]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:35:54.753 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:54.817 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":14,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:54.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:54 vm08 ceph-mon[58470]: Deploying daemon osd.3 on vm01 2026-03-10T09:35:54.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:54 vm08 ceph-mon[58470]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:54.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:54 vm08 ceph-mon[58470]: from='osd.0 [v2:192.168.123.108:6800/1166711813,v1:192.168.123.108:6801/1166711813]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T09:35:54.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:54 vm08 ceph-mon[58470]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T09:35:54.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:54 vm08 ceph-mon[58470]: from='osd.1 [v2:192.168.123.101:6802/3603837159,v1:192.168.123.101:6803/3603837159]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:35:54.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:54 vm08 ceph-mon[58470]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T09:35:55.819 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["0"]}]': finished 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: osdmap e14: 8 total, 0 up, 8 in 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:55.921 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='osd.1 [v2:192.168.123.101:6802/3603837159,v1:192.168.123.101:6803/3603837159]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='osd.0 [v2:192.168.123.108:6800/1166711813,v1:192.168.123.108:6801/1166711813]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T09:35:55.921 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: Deploying daemon osd.4 on vm08 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3263605389' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T09:35:55.921 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:55 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: osdmap e14: 8 total, 0 up, 8 in 2026-03-10T09:35:56.087 
INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:56.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='osd.1 [v2:192.168.123.101:6802/3603837159,v1:192.168.123.101:6803/3603837159]' entity='osd.1' 
cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='osd.0 [v2:192.168.123.108:6800/1166711813,v1:192.168.123.108:6801/1166711813]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: Deploying daemon osd.4 on vm08 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 
ceph-mon[58470]: from='client.? 192.168.123.101:0/3263605389' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T09:35:56.088 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:55 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:56.096 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:56.358 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:56.468 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":15,"num_osds":8,"num_up_osds":0,"osd_up_since":0,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: Deploying daemon osd.5 on vm01 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 
ceph-mon[50888]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: osdmap e15: 8 total, 0 up, 8 in 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: 
from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:56.715 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:56 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/143786127' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: Deploying daemon osd.5 on vm01 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: osdmap e15: 8 total, 0 up, 8 in 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:56.836 
INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", 
"id": 0}]: dispatch 2026-03-10T09:35:56.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:56 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/143786127' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:57.469 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:57.669 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: purged_snaps scrub starts 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: purged_snaps scrub ok 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: purged_snaps scrub starts 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: purged_snaps scrub ok 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: osd.0 [v2:192.168.123.108:6800/1166711813,v1:192.168.123.108:6801/1166711813] boot 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: osd.1 
[v2:192.168.123.101:6802/3603837159,v1:192.168.123.101:6803/3603837159] boot 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: osdmap e16: 8 total, 2 up, 8 in 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd 
metadata", "id": 7}]: dispatch 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:57.786 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T09:35:57.787 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:57.787 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T09:35:57.787 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:57 vm08 ceph-mon[58470]: from='osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: purged_snaps scrub starts 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: purged_snaps scrub ok 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: purged_snaps scrub starts 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: purged_snaps scrub ok 2026-03-10T09:35:57.876 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: osd.0 [v2:192.168.123.108:6800/1166711813,v1:192.168.123.108:6801/1166711813] boot 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: osd.1 [v2:192.168.123.101:6802/3603837159,v1:192.168.123.101:6803/3603837159] boot 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: osdmap e16: 8 total, 2 up, 8 in 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' 
entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T09:35:57.876 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:57 vm01 ceph-mon[50888]: from='osd.3 
[v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch 2026-03-10T09:35:57.984 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:58.037 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":17,"num_osds":8,"num_up_osds":2,"osd_up_since":1773135356,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: Deploying daemon osd.6 on vm08 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: osdmap e17: 8 total, 2 up, 8 in 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: 
dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 
cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2652500614' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='osd.4 [v2:192.168.123.108:6816/3375933889,v1:192.168.123.108:6817/3375933889]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T09:35:58.742 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:58 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T09:35:59.038 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: Deploying daemon osd.6 on vm08 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='osd.2 
[v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: osdmap e17: 8 total, 2 up, 8 in 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:59.086 
INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/2652500614' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='osd.4 [v2:192.168.123.108:6816/3375933889,v1:192.168.123.108:6817/3375933889]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T09:35:59.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:58 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T09:35:59.242 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:35:59.612 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:35:59.711 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":18,"num_osds":8,"num_up_osds":2,"osd_up_since":1773135356,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:35:59.794 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: Deploying daemon osd.7 on vm01 2026-03-10T09:35:59.794 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, 
"weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: osdmap e18: 8 total, 2 up, 8 in 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='osd.4 [v2:192.168.123.108:6816/3375933889,v1:192.168.123.108:6817/3375933889]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: 
from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:59.795 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:35:59 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/3479479806' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: Deploying daemon osd.7 on vm01 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557]' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: osdmap e18: 8 total, 2 up, 8 in 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='osd.4 [v2:192.168.123.108:6816/3375933889,v1:192.168.123.108:6817/3375933889]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": 
"osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' 
entity='mgr.vm01.itvfys' 2026-03-10T09:35:59.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:35:59 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3479479806' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:00.712 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: purged_snaps scrub starts 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: purged_snaps scrub ok 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: purged_snaps scrub starts 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: purged_snaps scrub ok 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266]' entity='osd.5' 
cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557] boot 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: osdmap e19: 8 total, 3 up, 8 in 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 
192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='osd.6 [v2:192.168.123.108:6824/4197225113,v1:192.168.123.108:6825/4197225113]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T09:36:00.915 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T09:36:00.916 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:00.916 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:00 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:00.984 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: purged_snaps scrub starts 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: purged_snaps scrub ok 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 
10 09:36:00 vm08 ceph-mon[58470]: purged_snaps scrub starts 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: purged_snaps scrub ok 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: pgmap v29: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266]' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: osd.3 [v2:192.168.123.101:6810/3013306557,v1:192.168.123.101:6811/3013306557] boot 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: osdmap e19: 8 total, 3 up, 8 in 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' 
entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724]' entity='osd.2' 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:01.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='osd.6 [v2:192.168.123.108:6824/4197225113,v1:192.168.123.108:6825/4197225113]' entity='osd.6' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T09:36:01.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T09:36:01.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:01.087 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:00 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:01.339 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:01.404 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":20,"num_osds":8,"num_up_osds":5,"osd_up_since":1773135360,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: purged_snaps scrub starts 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: purged_snaps scrub ok 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='osd.4 ' entity='osd.4' 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='osd.6 
[v2:192.168.123.108:6824/4197225113,v1:192.168.123.108:6825/4197225113]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724] boot 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: osd.4 [v2:192.168.123.108:6816/3375933889,v1:192.168.123.108:6817/3375933889] boot 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: osdmap e20: 8 total, 5 up, 8 in 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 
192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1487407469' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: pgmap v32: 0 pgs: ; 0 B data, 413 MiB used, 80 GiB / 80 GiB avail 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:36:01.696 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:01 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:36:01.999 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: purged_snaps scrub starts 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: purged_snaps scrub ok 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='osd.4 ' entity='osd.4' 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='osd.5 
[v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266]' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='osd.6 [v2:192.168.123.108:6824/4197225113,v1:192.168.123.108:6825/4197225113]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: osd.2 [v2:192.168.123.108:6808/1751226724,v1:192.168.123.108:6809/1751226724] boot 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: osd.4 [v2:192.168.123.108:6816/3375933889,v1:192.168.123.108:6817/3375933889] boot 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: osdmap e20: 8 total, 5 up, 8 in 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T09:36:02.000 
INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/1487407469' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: pgmap v32: 0 pgs: ; 0 B data, 413 MiB used, 80 GiB / 80 GiB avail 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T09:36:02.000 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:01 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:36:02.405 
DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:36:02.617 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:02.793 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: purged_snaps scrub starts 2026-03-10T09:36:02.793 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: purged_snaps scrub starts 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: purged_snaps scrub ok 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266] boot 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: osdmap e21: 8 total, 6 up, 8 in 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:36:02.799 
INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='osd.7 [v2:192.168.123.101:6826/505627033,v1:192.168.123.101:6827/505627033]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:02.799 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:02 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:02.887 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:02.945 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":22,"num_osds":8,"num_up_osds":7,"osd_up_since":1773135362,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: purged_snaps scrub ok 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: osd.5 [v2:192.168.123.101:6818/3174081266,v1:192.168.123.101:6819/3174081266] boot 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: osdmap e21: 8 total, 6 up, 8 in 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: 
from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='osd.7 [v2:192.168.123.101:6826/505627033,v1:192.168.123.101:6827/505627033]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:03.057 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:02 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:03.947 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: purged_snaps scrub starts 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: purged_snaps scrub ok 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='mgr.14217 
192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: osd.6 [v2:192.168.123.108:6824/4197225113,v1:192.168.123.108:6825/4197225113] boot 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: osdmap e22: 8 total, 7 up, 8 in 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='osd.7 [v2:192.168.123.101:6826/505627033,v1:192.168.123.101:6827/505627033]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/385059467' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: Detected new or changed devices on vm08 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:36:04.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:03 vm08 ceph-mon[58470]: pgmap v35: 1 pgs: 1 creating+peering; 0 B data, 492 MiB used, 139 GiB / 140 GiB avail 2026-03-10T09:36:04.139 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: purged_snaps scrub starts 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: purged_snaps scrub ok 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 
cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: osd.6 [v2:192.168.123.108:6824/4197225113,v1:192.168.123.108:6825/4197225113] boot 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: osdmap e22: 8 total, 7 up, 8 in 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='osd.7 [v2:192.168.123.101:6826/505627033,v1:192.168.123.101:6827/505627033]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/385059467' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: Detected new or changed devices on vm08 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:36:04.166 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:03 vm01 ceph-mon[50888]: pgmap v35: 1 pgs: 1 creating+peering; 0 B data, 492 MiB used, 139 GiB / 140 GiB avail 2026-03-10T09:36:04.379 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:04.467 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":23,"num_osds":8,"num_up_osds":7,"osd_up_since":1773135362,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:36:04.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 sudo[77832]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T09:36:04.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 sudo[77832]: pam_systemd(sudo:session): Failed to connect to 
system bus: No such file or directory 2026-03-10T09:36:04.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 sudo[77832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:36:04.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 sudo[77832]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:36:04.810 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 sudo[70983]: ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda 2026-03-10T09:36:04.811 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 sudo[70983]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory 2026-03-10T09:36:04.811 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 sudo[70983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167) 2026-03-10T09:36:04.811 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 sudo[70983]: pam_unix(sudo:session): session closed for user root 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: osdmap e23: 8 total, 7 up, 8 in 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: Detected new or changed devices on vm01 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 
2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/2837216187' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:36:05.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:04 vm08 ceph-mon[58470]: from='osd.7 ' entity='osd.7' 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: 
from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: osdmap e23: 8 total, 7 up, 8 in 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: Detected new or changed devices on vm01 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 
09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2837216187' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm01"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "mon metadata", "id": "vm08"}]: dispatch 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='admin 
socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T09:36:05.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:04 vm01 ceph-mon[50888]: from='osd.7 ' entity='osd.7' 2026-03-10T09:36:05.468 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd stat -f json 2026-03-10T09:36:05.629 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:05.837 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:05.883 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":24,"num_osds":8,"num_up_osds":8,"osd_up_since":1773135365,"num_in_osds":8,"osd_in_since":1773135346,"num_remapped_pgs":0} 2026-03-10T09:36:05.883 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd dump --format=json 2026-03-10T09:36:06.041 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:06.062 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:05 vm01 ceph-mon[50888]: purged_snaps scrub starts 2026-03-10T09:36:06.062 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:05 vm01 ceph-mon[50888]: purged_snaps scrub ok 2026-03-10T09:36:06.062 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:05 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:06.062 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:05 vm01 ceph-mon[50888]: osd.7 [v2:192.168.123.101:6826/505627033,v1:192.168.123.101:6827/505627033] boot 
2026-03-10T09:36:06.062 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:05 vm01 ceph-mon[50888]: osdmap e24: 8 total, 8 up, 8 in 2026-03-10T09:36:06.062 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:05 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:06.062 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:05 vm01 ceph-mon[50888]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 492 MiB used, 139 GiB / 140 GiB avail 2026-03-10T09:36:06.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:05 vm08 ceph-mon[58470]: purged_snaps scrub starts 2026-03-10T09:36:06.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:05 vm08 ceph-mon[58470]: purged_snaps scrub ok 2026-03-10T09:36:06.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:05 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:06.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:05 vm08 ceph-mon[58470]: osd.7 [v2:192.168.123.101:6826/505627033,v1:192.168.123.101:6827/505627033] boot 2026-03-10T09:36:06.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:05 vm08 ceph-mon[58470]: osdmap e24: 8 total, 8 up, 8 in 2026-03-10T09:36:06.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:05 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T09:36:06.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:05 vm08 ceph-mon[58470]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 492 MiB used, 139 GiB / 140 GiB avail 2026-03-10T09:36:06.262 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:06.263 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":24,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","created":"2026-03-10T09:34:24.264503+0000","modified":"2026-03-10T09:36:05.165341+0000","last_up_change":"2026-03-10T09:36:05.165341+0000","last_in_change":"2026-03-10T09:35:46.986147+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":11,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T09:36:01.553893+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"b112f42b-2b1e-413f-b116-f865f94b0c29","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6801","nonce":1166711813}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6803","nonce":1166711813}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6807","nonce":1166711813}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6805","nonce":1166711813}]},"public_addr":"192.168.123.108:6801/1166711813","cluster_addr":"192.168.123.108:6803/1166711813","heartbeat_back_addr":"192.168.123.108:6807/1166711813","heartbeat_front_addr":"192.168.123.108:6805/1166711813","state":["exists","up"]},{"osd":1,"uuid":"de2dce78-ca33-4b1c-9e69-a1a6c779ba19","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3603837159},{"type":"v1","
addr":"192.168.123.101:6803","nonce":3603837159}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6805","nonce":3603837159}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6809","nonce":3603837159}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6807","nonce":3603837159}]},"public_addr":"192.168.123.101:6803/3603837159","cluster_addr":"192.168.123.101:6805/3603837159","heartbeat_back_addr":"192.168.123.101:6809/3603837159","heartbeat_front_addr":"192.168.123.101:6807/3603837159","state":["exists","up"]},{"osd":2,"uuid":"52923810-6aca-4cac-8f49-9dce6a88ce87","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6808","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6809","nonce":1751226724}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6810","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6811","nonce":1751226724}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6814","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6815","nonce":1751226724}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6812","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6813","nonce":1751226724}]},"public_addr":"192.168.123.108:6809/1751226724","cluster_addr":"192.168.123.108:6811/1751226724","heartbeat_back_addr":"192.168.123.108:6815/1751226724","heartbeat_front_addr":"192.168.123.108:6813/1751226724","state":["exists","up"]},{"osd":3,"uuid":"42839f5e-af27-4202-b7cc-68e318d52cee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"d
own_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6811","nonce":3013306557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6813","nonce":3013306557}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6816","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6817","nonce":3013306557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6815","nonce":3013306557}]},"public_addr":"192.168.123.101:6811/3013306557","cluster_addr":"192.168.123.101:6813/3013306557","heartbeat_back_addr":"192.168.123.101:6817/3013306557","heartbeat_front_addr":"192.168.123.101:6815/3013306557","state":["exists","up"]},{"osd":4,"uuid":"23dcd582-f844-4138-a1e1-2aa2b2311bc8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6816","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6817","nonce":3375933889}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6818","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6819","nonce":3375933889}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6822","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6823","nonce":3375933889}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6820","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6821","nonce":3375933889}]},"public_addr":"192.168.123.108:6817/3375933889","cluster_addr":"192.168.123.108:6819/3375933889","heartbeat_back_addr":"192.168.123.108:6823/3375933889","heartbeat_front_addr":"192.168.123.108:6821/3375933889","state":["exists","up"]},{"osd":5,"uuid":"2cbd4429-08c1-4c36-bb18-3
b48502e7ba6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6819","nonce":3174081266}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6821","nonce":3174081266}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6824","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6825","nonce":3174081266}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6823","nonce":3174081266}]},"public_addr":"192.168.123.101:6819/3174081266","cluster_addr":"192.168.123.101:6821/3174081266","heartbeat_back_addr":"192.168.123.101:6825/3174081266","heartbeat_front_addr":"192.168.123.101:6823/3174081266","state":["exists","up"]},{"osd":6,"uuid":"5d5b5f7a-3dec-4b69-bb94-bd45de84c36c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6824","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6825","nonce":4197225113}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6826","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6827","nonce":4197225113}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6830","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6831","nonce":4197225113}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6828","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6829","nonce":4197225113}]},"public_addr":"192.168.123.108:6825/4197225113","cluster_addr":"192.168.123.108:6827/4197225113","heartbeat_back_addr":"192.168.123.108:6831/41972251
13","heartbeat_front_addr":"192.168.123.108:6829/4197225113","state":["exists","up"]},{"osd":7,"uuid":"117b0fe8-68ff-45e8-b71b-20c4438c4bbe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6826","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6827","nonce":505627033}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6828","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6829","nonce":505627033}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6832","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6833","nonce":505627033}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6830","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6831","nonce":505627033}]},"public_addr":"192.168.123.101:6827/505627033","cluster_addr":"192.168.123.101:6829/505627033","heartbeat_back_addr":"192.168.123.101:6833/505627033","heartbeat_front_addr":"192.168.123.101:6831/505627033","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:54.924104+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:55.448286+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:57.738298+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:58.446323+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_int
erval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:59.325176+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:36:00.675781+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:36:01.305702+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/237417781":"2026-03-11T09:35:13.514786+0000","192.168.123.101:0/3158952217":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/914506191":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/2020023795":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/1134624716":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6801/1217789111":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/969840687":"2026-03-11T09:34:46.390423+0000","192.168.123.101:0/4266255393":"2026-03-11T09:35:13.514786+0000","192.168.123.101:0/2622952359":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6800/2854054601":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6800/1217789111":"2026-03-11T09:34:34.933336+0000","192.168.123.101:6800/1826817318":"2026-03-11T09:35:13.514786+0000","192.168.123.101:6801/2854054601":"2026-03-11T09:34:46.390423+0000","192.168.123.101:0/3938110568":"2026-03-11T09:35:13.514786+0000","192.168.123.101:6801/1826817318":"2026-03-11T09:35:13.514786+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged
_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T09:36:06.323 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T09:36:01.553893+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '24', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': 
{'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T09:36:06.323 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd pool get .mgr pg_num 2026-03-10T09:36:06.479 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:06.678 INFO:teuthology.orchestra.run.vm01.stdout:pg_num: 1 2026-03-10T09:36:06.742 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T09:36:06.742 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T09:36:06.908 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:06.930 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:06 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2877567168' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:06.930 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:06 vm01 ceph-mon[50888]: mgrmap e19: vm01.itvfys(active, since 52s), standbys: vm08.pllkti 2026-03-10T09:36:06.931 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:06 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/1854288556' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:36:06.931 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:06 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1265268202' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T09:36:07.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:06 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/2877567168' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T09:36:07.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:06 vm08 ceph-mon[58470]: mgrmap e19: vm01.itvfys(active, since 52s), standbys: vm08.pllkti 2026-03-10T09:36:07.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:06 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/1854288556' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:36:07.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:06 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/1265268202' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T09:36:07.154 INFO:teuthology.orchestra.run.vm01.stdout:[client.0] 2026-03-10T09:36:07.154 INFO:teuthology.orchestra.run.vm01.stdout: key = AQAH5q9p3/ENCRAAz99bQEB/rw5+L9bT3Hgbtg== 2026-03-10T09:36:07.196 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T09:36:07.196 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T09:36:07.196 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T09:36:07.232 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T09:36:07.397 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm08/config 2026-03-10T09:36:07.648 INFO:teuthology.orchestra.run.vm08.stdout:[client.1] 2026-03-10T09:36:07.649 INFO:teuthology.orchestra.run.vm08.stdout: key = AQAH5q9pb5+EJhAABnR0VdFTqbYiMO7ZUXj0pQ== 2026-03-10T09:36:07.711 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T09:36:07.711 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T09:36:07.711 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T09:36:07.741 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-10T09:36:07.741 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T09:36:07.741 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mgr dump --format=json 2026-03-10T09:36:07.823 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:07 vm08 ceph-mon[58470]: osdmap e25: 8 total, 8 up, 8 in 2026-03-10T09:36:07.823 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:07 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/1413842503' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T09:36:07.823 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:07 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/1413842503' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T09:36:07.823 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:07 vm08 ceph-mon[58470]: pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 612 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:07.823 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:07 vm08 ceph-mon[58470]: from='client.? 192.168.123.108:0/3803662129' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T09:36:07.823 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:07 vm08 ceph-mon[58470]: from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T09:36:07.823 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:07 vm08 ceph-mon[58470]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T09:36:07.902 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:08.017 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:07 vm01 ceph-mon[50888]: osdmap e25: 8 total, 8 up, 8 in 2026-03-10T09:36:08.017 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:07 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1413842503' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T09:36:08.017 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:07 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1413842503' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T09:36:08.017 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:07 vm01 ceph-mon[50888]: pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 612 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:08.017 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:07 vm01 ceph-mon[50888]: from='client.? 
192.168.123.108:0/3803662129' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T09:36:08.017 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:07 vm01 ceph-mon[50888]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T09:36:08.018 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:07 vm01 ceph-mon[50888]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T09:36:08.135 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:08.199 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":19,"flags":0,"active_gid":14217,"active_name":"vm01.itvfys","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":3328478935},{"type":"v1","addr":"192.168.123.101:6801","nonce":3328478935}]},"active_addr":"192.168.123.101:6801/3328478935","active_change":"2026-03-10T09:35:13.515037+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14240,"name":"vm08.pllkti","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.101:8443/","prometheus":"http://192.168.123.101:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":5,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":2304842960}]},{"name":"libceph
sqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":3566779568}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":3849182807}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":2717609284}]}]} 2026-03-10T09:36:08.200 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T09:36:08.200 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T09:36:08.201 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd dump --format=json 2026-03-10T09:36:08.358 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:08.565 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:08.566 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":25,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","created":"2026-03-10T09:34:24.264503+0000","modified":"2026-03-10T09:36:06.814816+0000","last_up_change":"2026-03-10T09:36:05.165341+0000","last_in_change":"2026-03-10T09:35:46.986147+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":11,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T09:36:01.553893+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_
stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"b112f42b-2b1e-413f-b116-f865f94b0c29","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6801","nonce":1166711813}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6803","nonce":1166711813}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6807","nonce":1166711813}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6805","nonce":1166711813}]},"public_addr":"192.168.123.108:6801/1166711813","cluster_addr":"192.168.123.108:6803/1166711813","heartbeat_back_addr":"192.168.123.108:6807/1166711813","heartbeat_front_addr":"192.168.123.108:6805/1166711813","state":["exists","up"]},{"osd":1,"uuid":"de2dce78-ca33-4b1c-9e69-a1a6c779ba19","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6803","nonce":3603837159}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6805","nonce":3603837159}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6809","nonce":3603837159}]},"hear
tbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6807","nonce":3603837159}]},"public_addr":"192.168.123.101:6803/3603837159","cluster_addr":"192.168.123.101:6805/3603837159","heartbeat_back_addr":"192.168.123.101:6809/3603837159","heartbeat_front_addr":"192.168.123.101:6807/3603837159","state":["exists","up"]},{"osd":2,"uuid":"52923810-6aca-4cac-8f49-9dce6a88ce87","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6808","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6809","nonce":1751226724}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6810","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6811","nonce":1751226724}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6814","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6815","nonce":1751226724}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6812","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6813","nonce":1751226724}]},"public_addr":"192.168.123.108:6809/1751226724","cluster_addr":"192.168.123.108:6811/1751226724","heartbeat_back_addr":"192.168.123.108:6815/1751226724","heartbeat_front_addr":"192.168.123.108:6813/1751226724","state":["exists","up"]},{"osd":3,"uuid":"42839f5e-af27-4202-b7cc-68e318d52cee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6811","nonce":3013306557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6813","nonce":3013306557}]},"heartbeat_back_addrs":{"addrvec":[{"type"
:"v2","addr":"192.168.123.101:6816","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6817","nonce":3013306557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6815","nonce":3013306557}]},"public_addr":"192.168.123.101:6811/3013306557","cluster_addr":"192.168.123.101:6813/3013306557","heartbeat_back_addr":"192.168.123.101:6817/3013306557","heartbeat_front_addr":"192.168.123.101:6815/3013306557","state":["exists","up"]},{"osd":4,"uuid":"23dcd582-f844-4138-a1e1-2aa2b2311bc8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6816","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6817","nonce":3375933889}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6818","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6819","nonce":3375933889}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6822","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6823","nonce":3375933889}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6820","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6821","nonce":3375933889}]},"public_addr":"192.168.123.108:6817/3375933889","cluster_addr":"192.168.123.108:6819/3375933889","heartbeat_back_addr":"192.168.123.108:6823/3375933889","heartbeat_front_addr":"192.168.123.108:6821/3375933889","state":["exists","up"]},{"osd":5,"uuid":"2cbd4429-08c1-4c36-bb18-3b48502e7ba6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6819","nonce":3174081266}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820",
"nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6821","nonce":3174081266}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6824","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6825","nonce":3174081266}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6823","nonce":3174081266}]},"public_addr":"192.168.123.101:6819/3174081266","cluster_addr":"192.168.123.101:6821/3174081266","heartbeat_back_addr":"192.168.123.101:6825/3174081266","heartbeat_front_addr":"192.168.123.101:6823/3174081266","state":["exists","up"]},{"osd":6,"uuid":"5d5b5f7a-3dec-4b69-bb94-bd45de84c36c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6824","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6825","nonce":4197225113}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6826","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6827","nonce":4197225113}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6830","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6831","nonce":4197225113}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6828","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6829","nonce":4197225113}]},"public_addr":"192.168.123.108:6825/4197225113","cluster_addr":"192.168.123.108:6827/4197225113","heartbeat_back_addr":"192.168.123.108:6831/4197225113","heartbeat_front_addr":"192.168.123.108:6829/4197225113","state":["exists","up"]},{"osd":7,"uuid":"117b0fe8-68ff-45e8-b71b-20c4438c4bbe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6826","nonce":505627033},{"type":"v
1","addr":"192.168.123.101:6827","nonce":505627033}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6828","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6829","nonce":505627033}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6832","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6833","nonce":505627033}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6830","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6831","nonce":505627033}]},"public_addr":"192.168.123.101:6827/505627033","cluster_addr":"192.168.123.101:6829/505627033","heartbeat_back_addr":"192.168.123.101:6833/505627033","heartbeat_front_addr":"192.168.123.101:6831/505627033","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:54.924104+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:55.448286+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:57.738298+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:58.446323+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:59.325176+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:36:00.675781+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probabil
ity":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:36:01.305702+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:36:03.079763+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/237417781":"2026-03-11T09:35:13.514786+0000","192.168.123.101:0/3158952217":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/914506191":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/2020023795":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/1134624716":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6801/1217789111":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/969840687":"2026-03-11T09:34:46.390423+0000","192.168.123.101:0/4266255393":"2026-03-11T09:35:13.514786+0000","192.168.123.101:0/2622952359":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6800/2854054601":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6800/1217789111":"2026-03-11T09:34:34.933336+0000","192.168.123.101:6800/1826817318":"2026-03-11T09:35:13.514786+0000","192.168.123.101:6801/2854054601":"2026-03-11T09:34:46.390423+0000","192.168.123.101:0/3938110568":"2026-03-11T09:35:13.514786+0000","192.168.123.101:6801/1826817318":"2026-03-11T09:35:13.514786+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T09:36:08.612 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-10T09:36:08.612 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd dump --format=json 2026-03-10T09:36:08.768 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:08.884 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:08 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3988843512' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T09:36:08.884 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:08 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/4169583061' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:36:08.980 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:08.980 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":25,"fsid":"362248b4-1c64-11f1-a99c-11af91d3124e","created":"2026-03-10T09:34:24.264503+0000","modified":"2026-03-10T09:36:06.814816+0000","last_up_change":"2026-03-10T09:36:05.165341+0000","last_in_change":"2026-03-10T09:35:46.986147+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":11,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T09:36:01.553893+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_ma
ndatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"24","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"b112f42b-2b1e-413f-b116-f865f94b0c29","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6801","nonce":1166711813}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6803","nonce":1166711813}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6807","nonce":1166711813}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":1166711813},{"type":"v1","addr":"192.168.123.108:6805","nonce":1166711813}]},"public_addr":"192.168.123.108:6801/1166711813","cluster_addr":"192.168.123.108:6803/1166711813","heartbeat_back_addr":"192.168.123.108:6807/1166711813","heartbeat_front_addr":"192.168.123.108:6805/1166711813","state":["exists","up"]},{"osd":1,"uuid":"de2dce78-ca33-4b1c-9e69-a1a6c779ba19","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":16,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6803","nonce":3603837159}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6805","nonce":3603837159}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6809","nonce":3603837159}]},"hear
tbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3603837159},{"type":"v1","addr":"192.168.123.101:6807","nonce":3603837159}]},"public_addr":"192.168.123.101:6803/3603837159","cluster_addr":"192.168.123.101:6805/3603837159","heartbeat_back_addr":"192.168.123.101:6809/3603837159","heartbeat_front_addr":"192.168.123.101:6807/3603837159","state":["exists","up"]},{"osd":2,"uuid":"52923810-6aca-4cac-8f49-9dce6a88ce87","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6808","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6809","nonce":1751226724}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6810","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6811","nonce":1751226724}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6814","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6815","nonce":1751226724}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6812","nonce":1751226724},{"type":"v1","addr":"192.168.123.108:6813","nonce":1751226724}]},"public_addr":"192.168.123.108:6809/1751226724","cluster_addr":"192.168.123.108:6811/1751226724","heartbeat_back_addr":"192.168.123.108:6815/1751226724","heartbeat_front_addr":"192.168.123.108:6813/1751226724","state":["exists","up"]},{"osd":3,"uuid":"42839f5e-af27-4202-b7cc-68e318d52cee","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":19,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6811","nonce":3013306557}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6813","nonce":3013306557}]},"heartbeat_back_addrs":{"addrvec":[{"type"
:"v2","addr":"192.168.123.101:6816","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6817","nonce":3013306557}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":3013306557},{"type":"v1","addr":"192.168.123.101:6815","nonce":3013306557}]},"public_addr":"192.168.123.101:6811/3013306557","cluster_addr":"192.168.123.101:6813/3013306557","heartbeat_back_addr":"192.168.123.101:6817/3013306557","heartbeat_front_addr":"192.168.123.101:6815/3013306557","state":["exists","up"]},{"osd":4,"uuid":"23dcd582-f844-4138-a1e1-2aa2b2311bc8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":20,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6816","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6817","nonce":3375933889}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6818","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6819","nonce":3375933889}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6822","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6823","nonce":3375933889}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6820","nonce":3375933889},{"type":"v1","addr":"192.168.123.108:6821","nonce":3375933889}]},"public_addr":"192.168.123.108:6817/3375933889","cluster_addr":"192.168.123.108:6819/3375933889","heartbeat_back_addr":"192.168.123.108:6823/3375933889","heartbeat_front_addr":"192.168.123.108:6821/3375933889","state":["exists","up"]},{"osd":5,"uuid":"2cbd4429-08c1-4c36-bb18-3b48502e7ba6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":21,"up_thru":21,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6819","nonce":3174081266}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820",
"nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6821","nonce":3174081266}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6824","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6825","nonce":3174081266}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":3174081266},{"type":"v1","addr":"192.168.123.101:6823","nonce":3174081266}]},"public_addr":"192.168.123.101:6819/3174081266","cluster_addr":"192.168.123.101:6821/3174081266","heartbeat_back_addr":"192.168.123.101:6825/3174081266","heartbeat_front_addr":"192.168.123.101:6823/3174081266","state":["exists","up"]},{"osd":6,"uuid":"5d5b5f7a-3dec-4b69-bb94-bd45de84c36c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":22,"up_thru":22,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6824","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6825","nonce":4197225113}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6826","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6827","nonce":4197225113}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6830","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6831","nonce":4197225113}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6828","nonce":4197225113},{"type":"v1","addr":"192.168.123.108:6829","nonce":4197225113}]},"public_addr":"192.168.123.108:6825/4197225113","cluster_addr":"192.168.123.108:6827/4197225113","heartbeat_back_addr":"192.168.123.108:6831/4197225113","heartbeat_front_addr":"192.168.123.108:6829/4197225113","state":["exists","up"]},{"osd":7,"uuid":"117b0fe8-68ff-45e8-b71b-20c4438c4bbe","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6826","nonce":505627033},{"type":"v
1","addr":"192.168.123.101:6827","nonce":505627033}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6828","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6829","nonce":505627033}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6832","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6833","nonce":505627033}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6830","nonce":505627033},{"type":"v1","addr":"192.168.123.101:6831","nonce":505627033}]},"public_addr":"192.168.123.101:6827/505627033","cluster_addr":"192.168.123.101:6829/505627033","heartbeat_back_addr":"192.168.123.101:6833/505627033","heartbeat_front_addr":"192.168.123.101:6831/505627033","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:54.924104+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:55.448286+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:57.738298+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:58.446323+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:35:59.325176+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:36:00.675781+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probabil
ity":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:36:01.305702+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T09:36:03.079763+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/237417781":"2026-03-11T09:35:13.514786+0000","192.168.123.101:0/3158952217":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/914506191":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/2020023795":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/1134624716":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6801/1217789111":"2026-03-11T09:34:34.933336+0000","192.168.123.101:0/969840687":"2026-03-11T09:34:46.390423+0000","192.168.123.101:0/4266255393":"2026-03-11T09:35:13.514786+0000","192.168.123.101:0/2622952359":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6800/2854054601":"2026-03-11T09:34:46.390423+0000","192.168.123.101:6800/1217789111":"2026-03-11T09:34:34.933336+0000","192.168.123.101:6800/1826817318":"2026-03-11T09:35:13.514786+0000","192.168.123.101:6801/2854054601":"2026-03-11T09:34:46.390423+0000","192.168.123.101:0/3938110568":"2026-03-11T09:35:13.514786+0000","192.168.123.101:6801/1826817318":"2026-03-11T09:35:13.514786+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T09:36:09.038 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph tell osd.0 flush_pg_stats 2026-03-10T09:36:09.038 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph tell osd.1 flush_pg_stats 2026-03-10T09:36:09.038 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph tell osd.2 flush_pg_stats 2026-03-10T09:36:09.039 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph tell osd.3 flush_pg_stats 2026-03-10T09:36:09.039 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph tell osd.4 flush_pg_stats 2026-03-10T09:36:09.039 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph tell osd.5 flush_pg_stats 2026-03-10T09:36:09.039 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph tell osd.6 flush_pg_stats 2026-03-10T09:36:09.039 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph tell osd.7 flush_pg_stats 2026-03-10T09:36:09.086 
INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:08 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3988843512' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T09:36:09.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:08 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/4169583061' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:36:09.669 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:09.698 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:09.721 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:09.821 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:09.825 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:09.826 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:09.841 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:10.000 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:10.027 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:09 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/4232166087' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:36:10.027 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:09 vm01 ceph-mon[50888]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:10.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:09 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/4232166087' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T09:36:10.086 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:09 vm08 ceph-mon[58470]: pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:10.225 INFO:teuthology.orchestra.run.vm01.stdout:68719476740 2026-03-10T09:36:10.225 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.1 2026-03-10T09:36:10.494 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:10.577 INFO:teuthology.orchestra.run.vm01.stdout:68719476740 2026-03-10T09:36:10.577 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.0 2026-03-10T09:36:10.670 INFO:teuthology.orchestra.run.vm01.stdout:85899345923 2026-03-10T09:36:10.670 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.2 2026-03-10T09:36:10.704 INFO:teuthology.orchestra.run.vm01.stdout:85899345923 2026-03-10T09:36:10.704 
DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.4 2026-03-10T09:36:10.707 INFO:teuthology.orchestra.run.vm01.stdout:90194313219 2026-03-10T09:36:10.708 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.5 2026-03-10T09:36:10.870 INFO:teuthology.orchestra.run.vm01.stdout:103079215106 2026-03-10T09:36:10.871 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.7 2026-03-10T09:36:10.908 INFO:teuthology.orchestra.run.vm01.stdout:94489280515 2026-03-10T09:36:10.908 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.6 2026-03-10T09:36:10.984 INFO:teuthology.orchestra.run.vm01.stdout:81604378627 2026-03-10T09:36:10.984 INFO:teuthology.orchestra.run.vm01.stdout:68719476739 2026-03-10T09:36:10.984 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.3 2026-03-10T09:36:11.038 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:11.108 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476740 got 68719476739 for osd.1 2026-03-10T09:36:11.126 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:11 vm01 ceph-mon[50888]: 
from='client.? 192.168.123.101:0/3321399651' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:36:11.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:11 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3321399651' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:36:11.472 INFO:teuthology.orchestra.run.vm01.stdout:68719476739 2026-03-10T09:36:11.544 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:11.544 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:11.600 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476740 got 68719476739 for osd.0 2026-03-10T09:36:11.641 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:11.809 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:11.924 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:12.041 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:12.070 INFO:teuthology.orchestra.run.vm01.stdout:85899345923 2026-03-10T09:36:12.108 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.1 2026-03-10T09:36:12.228 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:12 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/1675414294' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:36:12.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:12 vm01 ceph-mon[50888]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:12.238 INFO:teuthology.orchestra.run.vm01.stdout:90194313219 2026-03-10T09:36:12.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:12 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/1675414294' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:36:12.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:12 vm08 ceph-mon[58470]: pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:12.397 INFO:tasks.cephadm.ceph_manager.ceph:need seq 90194313219 got 90194313219 for osd.5 2026-03-10T09:36:12.398 DEBUG:teuthology.parallel:result is None 2026-03-10T09:36:12.399 INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345923 got 85899345923 for osd.2 2026-03-10T09:36:12.399 DEBUG:teuthology.parallel:result is None 2026-03-10T09:36:12.524 INFO:teuthology.orchestra.run.vm01.stdout:94489280515 2026-03-10T09:36:12.528 INFO:teuthology.orchestra.run.vm01.stdout:85899345924 2026-03-10T09:36:12.552 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:12.600 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph osd last-stat-seq osd.0 2026-03-10T09:36:12.627 INFO:teuthology.orchestra.run.vm01.stdout:103079215107 2026-03-10T09:36:12.682 INFO:tasks.cephadm.ceph_manager.ceph:need seq 94489280515 got 94489280515 for osd.6 2026-03-10T09:36:12.682 DEBUG:teuthology.parallel:result is None 2026-03-10T09:36:12.728 
INFO:tasks.cephadm.ceph_manager.ceph:need seq 85899345923 got 85899345924 for osd.4 2026-03-10T09:36:12.728 DEBUG:teuthology.parallel:result is None 2026-03-10T09:36:12.769 INFO:tasks.cephadm.ceph_manager.ceph:need seq 103079215106 got 103079215107 for osd.7 2026-03-10T09:36:12.769 DEBUG:teuthology.parallel:result is None 2026-03-10T09:36:12.814 INFO:teuthology.orchestra.run.vm01.stdout:81604378628 2026-03-10T09:36:12.934 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:12.944 INFO:teuthology.orchestra.run.vm01.stdout:68719476740 2026-03-10T09:36:12.950 INFO:tasks.cephadm.ceph_manager.ceph:need seq 81604378627 got 81604378628 for osd.3 2026-03-10T09:36:12.950 DEBUG:teuthology.parallel:result is None 2026-03-10T09:36:12.994 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476740 got 68719476740 for osd.1 2026-03-10T09:36:12.994 DEBUG:teuthology.parallel:result is None 2026-03-10T09:36:13.074 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:13 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2434391210' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:36:13.074 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:13 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3839646637' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T09:36:13.074 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:13 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/1569720024' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T09:36:13.074 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:13 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/1890080947' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T09:36:13.074 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:13 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3358291808' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T09:36:13.074 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:13 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3370274983' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T09:36:13.074 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:13 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/3149319596' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:36:13.174 INFO:teuthology.orchestra.run.vm01.stdout:68719476740 2026-03-10T09:36:13.231 INFO:tasks.cephadm.ceph_manager.ceph:need seq 68719476740 got 68719476740 for osd.0 2026-03-10T09:36:13.231 DEBUG:teuthology.parallel:result is None 2026-03-10T09:36:13.231 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T09:36:13.231 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph pg dump --format=json 2026-03-10T09:36:13.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:13 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/2434391210' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T09:36:13.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:13 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3839646637' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-10T09:36:13.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:13 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/1569720024' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-10T09:36:13.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:13 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/1890080947' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-10T09:36:13.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:13 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3358291808' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-10T09:36:13.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:13 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3370274983' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-10T09:36:13.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:13 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/3149319596' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T09:36:13.392 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:13.603 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:13.603 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-10T09:36:13.670 
INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":43,"stamp":"2026-03-10T09:36:13.530118+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":627800,"kb_used_data":3212,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167111592,"statfs":{"total":171765137408,"available":171122270208,"internally_reserved":0,"allocated":3289088,"data_stored":2086088,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit
_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"4.000462"},"pg_stats":[{"pgid":"1.0","version":"23'32","reported_seq":59,"reported_epoch":25,"state":"active+clean","last_fresh":"2026-03-10T09:36:07.308749+0000","last_change":"2026-03-10T09:36:04.413155+0000","last_active":"2026-03-10T09:36:07.308749+0000","last_peered":"2026-03-10T09:36:07.308749+0000","last_clean":"2026-03-10T09:36:07.308749+0000","last_became_active":"2026-03-10T09:36:04.413024+0000","last_became_peered":"2026-03-10T09:36:04.413024+0000","last_unstale":"2026-03-10T09:36:07.308749+0000","last_undegraded":"2026-03-10T09:36:07.308749+0000","last_fullsized":"2026-03-10T09:36:07.308749+0000","mapping_epoch":22,"log_start":"0'0","ondisk_log_start":"0'0",
"created":21,"last_epoch_clean":23,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T09:36:01.775155+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T09:36:01.775155+0000","last_clean_scrub_stamp":"2026-03-10T09:36:01.775155+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:26:05.426290+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"nu
m_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":24,"seq":103079215107,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436672,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530752,"statfs":{"total":21470642176,"available":21023490048,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":22,"seq":94489280516,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27656,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939768,"statfs":{"t
otal":21470642176,"available":21442322432,"internally_reserved":0,"allocated":700416,"data_stored":547811,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":21,"seq":90194313220,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27524,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939900,"statfs":{"total":21470642176,"available":21442457600,"internally_reserved":0,"allocated":700416,"data_stored":547811,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":20,"seq":85899345924,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27656,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939768,"statfs":{"total":21470642176,"available":21442322432,"internally_reserved":0,"allocated":700416,"data_stored":547811,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":20,"seq":85899345
924,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27076,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940348,"statfs":{"total":21470642176,"available":21442916352,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":19,"seq":81604378628,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27072,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940352,"statfs":{"total":21470642176,"available":21442920448,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":16,"seq":68719476741,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27072,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940352,"statfs":{"total":21470642176,"available":21442920448,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_qu
eue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":16,"seq":68719476741,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27072,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940352,"statfs":{"total":21470642176,"available":21442920448,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T09:36:13.670 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph pg dump --format=json 2026-03-10T09:36:13.834 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 
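The "waiting for clean" step above repeatedly runs `ceph pg dump --format=json` and inspects each PG's `state` until everything is `active+clean`. A minimal sketch of that check (helper names are illustrative, not teuthology's actual API; the real `ceph_manager` logic also tracks stuck PGs and per-OSD stat sequences):

```python
import json
import subprocess
import time

def all_pgs_clean(dump):
    # A PG counts as clean when its state string contains both flags,
    # e.g. "active+clean" in the pg_stats entries of the dump above.
    pgs = dump["pg_map"]["pg_stats"]
    return bool(pgs) and all(
        "active" in pg["state"] and "clean" in pg["state"] for pg in pgs)

def wait_for_clean(timeout=300, interval=5):
    # Poll `ceph pg dump --format=json` until every PG is active+clean.
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(["ceph", "pg", "dump", "--format=json"])
        if all_pgs_clean(json.loads(out)):
            return
        time.sleep(interval)
    raise RuntimeError("cluster did not become clean in time")
```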
2026-03-10T09:36:14.049 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:14.049 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-10T09:36:14.114 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":43,"stamp":"2026-03-10T09:36:13.530118+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":627800,"kb_used_data":3212,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167111592,"statfs":{"total":171765137408,"available":171122270208,"internally_reserved":0,"allocated":3289088,"data_stored":2086088,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12712,"internal_metadata":
219663960},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"4.000462"},"pg_stats":[{"pgid":"1.0","version":"23'32","reported_seq":59,"reported_epoch":25,"state":"active+clean","last_fresh":"2026-03-10T09:36:07.308749+0000","last_change":"2026-03-10T09:36:04.413155+0000","last_active":"2026-03-10T09:36:07.308749+0000","last_peered":"2026-03-10T09:36:07.308749+0000","last_clean":"2026-03-10T09:36:07.308749+0000","last_became_active":"2026-03-10T09:36:04.413024+0000","last_became_peered":"2026-03-10T09:36:04.413024+0000","last_unstale":"2026-03-10T09:36:07.308749+000
0","last_undegraded":"2026-03-10T09:36:07.308749+0000","last_fullsized":"2026-03-10T09:36:07.308749+0000","mapping_epoch":22,"log_start":"0'0","ondisk_log_start":"0'0","created":21,"last_epoch_clean":23,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T09:36:01.775155+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T09:36:01.775155+0000","last_clean_scrub_stamp":"2026-03-10T09:36:01.775155+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T19:26:05.426290+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum"
:{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":24,"seq":103079215107,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436672,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530752,"statfs":{"total":21470642176,"available":21023490048,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1585,"internal_metadata":27457999},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":22,"seq":94489280516,"num_pgs":1,"num_osds":1,
"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27656,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939768,"statfs":{"total":21470642176,"available":21442322432,"internally_reserved":0,"allocated":700416,"data_stored":547811,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":21,"seq":90194313220,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27524,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939900,"statfs":{"total":21470642176,"available":21442457600,"internally_reserved":0,"allocated":700416,"data_stored":547811,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":20,"seq":85899345924,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27656,"kb_used_data":684,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939768,"statfs":{"total":21470642176,"available":21442322432,"internally_reserved":0,"allocated":700416,"data_stored":547811,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[
],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":20,"seq":85899345924,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27076,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940348,"statfs":{"total":21470642176,"available":21442916352,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":19,"seq":81604378628,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27072,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940352,"statfs":{"total":21470642176,"available":21442920448,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1588,"internal_metadata":27457996},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":16,"seq":68719476741,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27072,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940352,"statfs":{"total":21470642176,"available":21442920448,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed
_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":16,"seq":68719476741,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":27072,"kb_used_data":232,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940352,"statfs":{"total":21470642176,"available":21442920448,"internally_reserved":0,"allocated":237568,"data_stored":88531,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T09:36:14.115 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T09:36:14.115 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
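The earlier "need seq X got Y for osd.N" lines come from polling `ceph osd last-stat-seq osd.N` until each OSD's reported stat sequence catches up. The values in this run are consistent with the sequence packing the OSD's `up_from` epoch in the high 32 bits and a per-boot report counter in the low 32 bits (e.g. osd.7: `up_from` 24, seq 103079215107 == (24 << 32) + 3). A sketch, with illustrative helper names and that packing treated as an observed assumption:

```python
import subprocess
import time

def split_stat_seq(seq):
    # Assumed layout, inferred from the dump above: high 32 bits hold the
    # OSD's up_from epoch, low 32 bits a per-boot report counter.
    return seq >> 32, seq & 0xFFFFFFFF

def wait_for_stat_seq(osd_id, need, interval=1):
    # Mirror the "need seq X got Y" loop: re-query until the OSD's
    # last-stat-seq reaches the target value captured before the test step.
    while True:
        got = int(subprocess.check_output(
            ["ceph", "osd", "last-stat-seq", f"osd.{osd_id}"]))
        if got >= need:
            return got
        time.sleep(interval)
```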
2026-03-10T09:36:14.115 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T09:36:14.115 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph health --format=json 2026-03-10T09:36:14.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:14 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/894290395' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:36:14.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:14 vm01 ceph-mon[50888]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:14.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:14 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T09:36:14.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:14 vm01 ceph-mon[50888]: from='client.14542 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:36:14.282 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:14.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:14 vm08 ceph-mon[58470]: from='client.? 
192.168.123.101:0/894290395' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T09:36:14.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:14 vm08 ceph-mon[58470]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:14.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:14 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T09:36:14.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:14 vm08 ceph-mon[58470]: from='client.14542 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:36:14.521 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T09:36:14.521 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T09:36:14.590 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T09:36:14.590 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T09:36:14.590 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T09:36:14.592 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm01.local 2026-03-10T09:36:14.592 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- bash -c 'ceph orch status' 2026-03-10T09:36:14.754 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:14.965 INFO:teuthology.orchestra.run.vm01.stdout:Backend: cephadm 2026-03-10T09:36:14.965 INFO:teuthology.orchestra.run.vm01.stdout:Available: Yes 2026-03-10T09:36:14.966 INFO:teuthology.orchestra.run.vm01.stdout:Paused: No 2026-03-10T09:36:15.036 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- bash -c 'ceph orch ps' 2026-03-10T09:36:15.214 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:15.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:15 vm01 ceph-mon[50888]: from='client.14546 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:36:15.229 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:15 vm01 ceph-mon[50888]: from='client.? 
192.168.123.101:0/446416687' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T09:36:15.335 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:15 vm08 ceph-mon[58470]: from='client.14546 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T09:36:15.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:15 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/446416687' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:alertmanager.vm01 vm01 *:9093,9094 running (44s) 12s ago 76s 23.6M - 0.25.0 c8568f914cd2 25cc08979536 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:ceph-exporter.vm01 vm01 *:9926 running (83s) 12s ago 83s 8330k - 19.2.3-678-ge911bdeb 654f31e6858e dd2bc0b1b8a1 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:ceph-exporter.vm08 vm08 *:9926 running (57s) 13s ago 57s 6577k - 19.2.3-678-ge911bdeb 654f31e6858e fc6392dfd5f9 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:crash.vm01 vm01 running (82s) 12s ago 82s 7616k - 19.2.3-678-ge911bdeb 654f31e6858e 61603169857c 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:crash.vm08 vm08 running (56s) 13s ago 56s 7616k - 19.2.3-678-ge911bdeb 654f31e6858e 91b2373cb759 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:grafana.vm01 vm01 *:3000 running (43s) 12s ago 70s 77.0M - 10.4.0 c8b91775d855 1fbf4c6a9b8a 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:mgr.vm01.itvfys vm01 *:9283,8765,8443 running (109s) 12s ago 109s 541M - 19.2.3-678-ge911bdeb 654f31e6858e 2709ebb88975 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:mgr.vm08.pllkti vm08 
*:8443,9283,8765 running (53s) 13s ago 53s 486M - 19.2.3-678-ge911bdeb 654f31e6858e ca9882325e83 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:mon.vm01 vm01 running (110s) 12s ago 111s 47.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 43041d34ec15 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:mon.vm08 vm08 running (52s) 13s ago 52s 42.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 30cd565b20a3 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:node-exporter.vm01 vm01 *:9100 running (80s) 12s ago 80s 9164k - 1.7.0 72c9c2088986 90be9b5d31b7 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:node-exporter.vm08 vm08 *:9100 running (54s) 13s ago 54s 8610k - 1.7.0 72c9c2088986 b8c6de2dae70 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:osd.0 vm08 running (23s) 13s ago 22s 31.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 07b0b1e39c9a 2026-03-10T09:36:15.437 INFO:teuthology.orchestra.run.vm01.stdout:osd.1 vm01 running (22s) 12s ago 22s 27.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f6911b57ce3c 2026-03-10T09:36:15.438 INFO:teuthology.orchestra.run.vm01.stdout:osd.2 vm08 running (20s) 13s ago 20s 51.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2a027fec4985 2026-03-10T09:36:15.438 INFO:teuthology.orchestra.run.vm01.stdout:osd.3 vm01 running (19s) 12s ago 19s 29.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e cc0a58dab766 2026-03-10T09:36:15.438 INFO:teuthology.orchestra.run.vm01.stdout:osd.4 vm08 running (18s) 13s ago 18s 51.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2f6d08bccc0e 2026-03-10T09:36:15.438 INFO:teuthology.orchestra.run.vm01.stdout:osd.5 vm01 running (17s) 12s ago 17s 30.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 60af614dbd9a 2026-03-10T09:36:15.438 INFO:teuthology.orchestra.run.vm01.stdout:osd.6 vm08 running (16s) 13s ago 16s 25.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 673121e0db01 2026-03-10T09:36:15.438 INFO:teuthology.orchestra.run.vm01.stdout:osd.7 vm01 running (15s) 12s ago 15s 14.4M 
4096M 19.2.3-678-ge911bdeb 654f31e6858e 8589efefd949 2026-03-10T09:36:15.438 INFO:teuthology.orchestra.run.vm01.stdout:prometheus.vm01 vm01 *:9095 running (42s) 12s ago 65s 31.4M - 2.51.0 1d3b7f56885b d951e438686e 2026-03-10T09:36:15.486 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- bash -c 'ceph orch ls' 2026-03-10T09:36:15.646 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:15.863 INFO:teuthology.orchestra.run.vm01.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-10T09:36:15.863 INFO:teuthology.orchestra.run.vm01.stdout:alertmanager ?:9093,9094 1/1 12s ago 94s count:1 2026-03-10T09:36:15.863 INFO:teuthology.orchestra.run.vm01.stdout:ceph-exporter ?:9926 2/2 13s ago 96s * 2026-03-10T09:36:15.863 INFO:teuthology.orchestra.run.vm01.stdout:crash 2/2 13s ago 96s * 2026-03-10T09:36:15.863 INFO:teuthology.orchestra.run.vm01.stdout:grafana ?:3000 1/1 12s ago 95s count:1 2026-03-10T09:36:15.864 INFO:teuthology.orchestra.run.vm01.stdout:mgr 2/2 13s ago 96s count:2 2026-03-10T09:36:15.864 INFO:teuthology.orchestra.run.vm01.stdout:mon 2/2 13s ago 81s vm01:192.168.123.101=vm01;vm08:192.168.123.108=vm08;count:2 2026-03-10T09:36:15.864 INFO:teuthology.orchestra.run.vm01.stdout:node-exporter ?:9100 2/2 13s ago 95s * 2026-03-10T09:36:15.864 INFO:teuthology.orchestra.run.vm01.stdout:osd.all-available-devices 8 13s ago 44s * 2026-03-10T09:36:15.864 INFO:teuthology.orchestra.run.vm01.stdout:prometheus ?:9095 1/1 12s ago 95s count:1 2026-03-10T09:36:15.922 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 
--fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- bash -c 'ceph orch host ls' 2026-03-10T09:36:16.051 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:16 vm01 ceph-mon[50888]: from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:16.051 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:16 vm01 ceph-mon[50888]: from='client.14558 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:16.084 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:16.290 INFO:teuthology.orchestra.run.vm01.stdout:HOST ADDR LABELS STATUS 2026-03-10T09:36:16.290 INFO:teuthology.orchestra.run.vm01.stdout:vm01 192.168.123.101 2026-03-10T09:36:16.290 INFO:teuthology.orchestra.run.vm01.stdout:vm08 192.168.123.108 2026-03-10T09:36:16.290 INFO:teuthology.orchestra.run.vm01.stdout:2 hosts in cluster 2026-03-10T09:36:16.322 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:16 vm01 ceph-mon[50888]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:16.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:16 vm08 ceph-mon[58470]: from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:16.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:16 vm08 ceph-mon[58470]: from='client.14558 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:16.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:16 vm08 ceph-mon[58470]: pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:16.393 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c 
/etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- bash -c 'ceph orch device ls' 2026-03-10T09:36:16.558 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 12s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/vdb hdd DWNBRSTVMM01001 20.0G No 12s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/vdc hdd DWNBRSTVMM01002 20.0G No 12s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/vdd hdd DWNBRSTVMM01003 20.0G No 12s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/vde hdd DWNBRSTVMM01004 20.0G No 12s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 13s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/vdb hdd DWNBRSTVMM08001 20.0G No 13s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/vdc hdd DWNBRSTVMM08002 20.0G No 13s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/vdd hdd DWNBRSTVMM08003 20.0G No 13s ago 
Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:16.768 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/vde hdd DWNBRSTVMM08004 20.0G No 13s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:16.834 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- bash -c 'ceph orch ls | grep '"'"'^osd.all-available-devices '"'"'' 2026-03-10T09:36:17.004 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:17.118 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:17 vm01 ceph-mon[50888]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:17.118 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:17 vm01 ceph-mon[50888]: from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:17.234 INFO:teuthology.orchestra.run.vm01.stdout:osd.all-available-devices 8 14s ago 45s * 2026-03-10T09:36:17.287 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T09:36:17.289 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm01.local 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- bash -c 'set -e 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> set -x 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> ceph orch ps 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> ceph orch device ls 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> DEVID=$(ceph device ls | grep osd.1 | awk '"'"'{print $1}'"'"') 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> HOST=$(ceph orch device ls | grep $DEVID | awk '"'"'{print $1}'"'"') 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> DEV=$(ceph orch device ls | grep $DEVID | awk '"'"'{print $2}'"'"') 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> echo "host $HOST, dev $DEV, devid $DEVID" 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> ceph orch osd rm 1 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> while ceph orch osd rm status | grep ^1 ; do sleep 5 ; done 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> ceph orch device zap $HOST $DEV --force 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> ceph orch daemon add osd $HOST:$DEV 2026-03-10T09:36:17.289 DEBUG:teuthology.orchestra.run.vm01:> while ! 
ceph osd dump | grep osd.1 | grep up ; do sleep 5 ; done 2026-03-10T09:36:17.290 DEBUG:teuthology.orchestra.run.vm01:> ' 2026-03-10T09:36:17.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:17 vm08 ceph-mon[58470]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:17.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:17 vm08 ceph-mon[58470]: from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:17.465 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:17.538 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch ps 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:alertmanager.vm01 vm01 *:9093,9094 running (46s) 14s ago 79s 23.6M - 0.25.0 c8568f914cd2 25cc08979536 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:ceph-exporter.vm01 vm01 *:9926 running (85s) 14s ago 85s 8330k - 19.2.3-678-ge911bdeb 654f31e6858e dd2bc0b1b8a1 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:ceph-exporter.vm08 vm08 *:9926 running (59s) 15s ago 59s 6577k - 19.2.3-678-ge911bdeb 654f31e6858e fc6392dfd5f9 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:crash.vm01 vm01 running (84s) 14s ago 84s 7616k - 19.2.3-678-ge911bdeb 654f31e6858e 61603169857c 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:crash.vm08 vm08 running (59s) 15s ago 58s 7616k - 19.2.3-678-ge911bdeb 654f31e6858e 91b2373cb759 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:grafana.vm01 vm01 *:3000 running (46s) 14s ago 72s 77.0M - 10.4.0 c8b91775d855 1fbf4c6a9b8a 2026-03-10T09:36:17.689 
INFO:teuthology.orchestra.run.vm01.stdout:mgr.vm01.itvfys vm01 *:9283,8765,8443 running (112s) 14s ago 111s 541M - 19.2.3-678-ge911bdeb 654f31e6858e 2709ebb88975 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:mgr.vm08.pllkti vm08 *:8443,9283,8765 running (55s) 15s ago 55s 486M - 19.2.3-678-ge911bdeb 654f31e6858e ca9882325e83 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:mon.vm01 vm01 running (112s) 14s ago 113s 47.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 43041d34ec15 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:mon.vm08 vm08 running (54s) 15s ago 54s 42.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 30cd565b20a3 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:node-exporter.vm01 vm01 *:9100 running (82s) 14s ago 82s 9164k - 1.7.0 72c9c2088986 90be9b5d31b7 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:node-exporter.vm08 vm08 *:9100 running (56s) 15s ago 56s 8610k - 1.7.0 72c9c2088986 b8c6de2dae70 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:osd.0 vm08 running (25s) 15s ago 25s 31.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 07b0b1e39c9a 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:osd.1 vm01 running (24s) 14s ago 24s 27.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f6911b57ce3c 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:osd.2 vm08 running (23s) 15s ago 23s 51.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2a027fec4985 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:osd.3 vm01 running (22s) 14s ago 22s 29.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e cc0a58dab766 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:osd.4 vm08 running (21s) 15s ago 20s 51.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 2f6d08bccc0e 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:osd.5 vm01 running (19s) 14s ago 19s 30.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 60af614dbd9a 2026-03-10T09:36:17.689 
INFO:teuthology.orchestra.run.vm01.stdout:osd.6 vm08 running (18s) 15s ago 18s 25.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 673121e0db01 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:osd.7 vm01 running (17s) 14s ago 17s 14.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8589efefd949 2026-03-10T09:36:17.689 INFO:teuthology.orchestra.run.vm01.stdout:prometheus.vm01 vm01 *:9095 running (45s) 14s ago 68s 31.4M - 2.51.0 1d3b7f56885b d951e438686e 2026-03-10T09:36:17.697 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch device ls 2026-03-10T09:36:17.851 INFO:teuthology.orchestra.run.vm01.stdout:HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS 2026-03-10T09:36:17.851 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 13s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T09:36:17.851 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/vdb hdd DWNBRSTVMM01001 20.0G No 13s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:17.851 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/vdc hdd DWNBRSTVMM01002 20.0G No 13s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:17.852 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/vdd hdd DWNBRSTVMM01003 20.0G No 13s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:17.852 INFO:teuthology.orchestra.run.vm01.stdout:vm01 /dev/vde hdd DWNBRSTVMM01004 20.0G No 13s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:17.852 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/sr0 hdd QEMU_DVD-ROM_QM00003 366k No 14s ago Has a FileSystem, Insufficient space (<5GB) 2026-03-10T09:36:17.852 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/vdb hdd DWNBRSTVMM08001 20.0G No 14s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:17.852 
INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/vdc hdd DWNBRSTVMM08002 20.0G No 14s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:17.852 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/vdd hdd DWNBRSTVMM08003 20.0G No 14s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:17.852 INFO:teuthology.orchestra.run.vm01.stdout:vm08 /dev/vde hdd DWNBRSTVMM08004 20.0G No 14s ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected 2026-03-10T09:36:17.861 INFO:teuthology.orchestra.run.vm01.stderr:++ ceph device ls 2026-03-10T09:36:17.861 INFO:teuthology.orchestra.run.vm01.stderr:++ awk '{print $1}' 2026-03-10T09:36:17.862 INFO:teuthology.orchestra.run.vm01.stderr:++ grep osd.1 2026-03-10T09:36:18.015 INFO:teuthology.orchestra.run.vm01.stderr:+ DEVID=DWNBRSTVMM01001 2026-03-10T09:36:18.015 INFO:teuthology.orchestra.run.vm01.stderr:++ grep DWNBRSTVMM01001 2026-03-10T09:36:18.015 INFO:teuthology.orchestra.run.vm01.stderr:++ awk '{print $1}' 2026-03-10T09:36:18.016 INFO:teuthology.orchestra.run.vm01.stderr:++ ceph orch device ls 2026-03-10T09:36:18.193 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:18 vm01 ceph-mon[50888]: from='client.14570 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:18.193 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:18 vm01 ceph-mon[50888]: from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:18.193 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:18 vm01 ceph-mon[50888]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:18.193 INFO:teuthology.orchestra.run.vm01.stderr:+ HOST=vm01 2026-03-10T09:36:18.193 INFO:teuthology.orchestra.run.vm01.stderr:++ ceph orch device ls 2026-03-10T09:36:18.193 
INFO:teuthology.orchestra.run.vm01.stderr:++ awk '{print $2}' 2026-03-10T09:36:18.195 INFO:teuthology.orchestra.run.vm01.stderr:++ grep DWNBRSTVMM01001 2026-03-10T09:36:18.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:18 vm08 ceph-mon[58470]: from='client.14570 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:18.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:18 vm08 ceph-mon[58470]: from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:18.336 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:18 vm08 ceph-mon[58470]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail 2026-03-10T09:36:18.360 INFO:teuthology.orchestra.run.vm01.stderr:+ DEV=/dev/vdb 2026-03-10T09:36:18.360 INFO:teuthology.orchestra.run.vm01.stderr:+ echo 'host vm01, dev /dev/vdb, devid DWNBRSTVMM01001' 2026-03-10T09:36:18.360 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch osd rm 1 2026-03-10T09:36:18.360 INFO:teuthology.orchestra.run.vm01.stdout:host vm01, dev /dev/vdb, devid DWNBRSTVMM01001 2026-03-10T09:36:18.527 INFO:teuthology.orchestra.run.vm01.stdout:Scheduled OSD(s) for removal. 2026-03-10T09:36:18.527 INFO:teuthology.orchestra.run.vm01.stdout:VG/LV for the OSDs won't be zapped (--zap wasn't passed). 2026-03-10T09:36:18.527 INFO:teuthology.orchestra.run.vm01.stdout:Run the `ceph-volume lvm zap` command with `--destroy` against the VG/LV if you want them to be destroyed. 
2026-03-10T09:36:18.549 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch osd rm status 2026-03-10T09:36:18.549 INFO:teuthology.orchestra.run.vm01.stderr:+ grep '^1' 2026-03-10T09:36:18.726 INFO:teuthology.orchestra.run.vm01.stdout:1 vm01 started 0 False False False 2026-03-10T09:36:18.726 INFO:teuthology.orchestra.run.vm01.stderr:+ sleep 5 2026-03-10T09:36:19.072 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='client.14578 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='client.14582 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='client.14586 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='client.14588 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='client.14596 -' entity='client.admin' cmd=[{"prefix": "orch osd rm", "osd_id": ["1"], "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd crush tree", "format": "json"}]: dispatch 2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 
ceph-mon[58470]: osd.1 crush weight is 0.0194854736328125
2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:19.073 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:19 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='client.14578 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='client.14582 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='client.14586 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='client.14588 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='client.14596 -' entity='client.admin' cmd=[{"prefix": "orch osd rm", "osd_id": ["1"], "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd crush tree", "format": "json"}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: osd.1 crush weight is 0.0194854736328125
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:19.116 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:19 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:20.286 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01[50884]: 2026-03-10T09:36:20.008+0000 7f8eba145640 -1 mon.vm01@0(leader).osd e25 definitely_dead 0
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='client.14600 -' entity='client.admin' cmd=[{"prefix": "orch osd rm status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd safe-to-destroy", "ids": ["1"]}]: dispatch
2026-03-10T09:36:20.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:20 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd down", "ids": ["1"]}]: dispatch
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='client.14600 -' entity='client.admin' cmd=[{"prefix": "orch osd rm status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd safe-to-destroy", "ids": ["1"]}]: dispatch
2026-03-10T09:36:20.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:20 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd down", "ids": ["1"]}]: dispatch
2026-03-10T09:36:21.478 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:21 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:21.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:21 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:21.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:21 vm01 ceph-mon[50888]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd safe-to-destroy", "ids": ["1"]}]: dispatch
2026-03-10T09:36:21.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:21 vm01 ceph-mon[50888]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T09:36:21.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:21 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "osd down", "ids": ["1"]}]': finished
2026-03-10T09:36:21.479 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:21 vm01 ceph-mon[50888]: osdmap e26: 8 total, 7 up, 8 in
2026-03-10T09:36:21.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:21 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:21.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:21 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd df", "format": "json"}]: dispatch
2026-03-10T09:36:21.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:21 vm08 ceph-mon[58470]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "osd safe-to-destroy", "ids": ["1"]}]: dispatch
2026-03-10T09:36:21.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:21 vm08 ceph-mon[58470]: Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T09:36:21.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:21 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "osd down", "ids": ["1"]}]': finished
2026-03-10T09:36:21.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:21 vm08 ceph-mon[58470]: osdmap e26: 8 total, 7 up, 8 in
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: osd.1 now down
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: Removing daemon osd.1 from vm01 -- ports []
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: Removing key for osd.1
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth rm", "entity": "osd.1"}]: dispatch
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth rm", "entity": "osd.1"}]': finished
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: Successfully removed osd.1 on vm01
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd purge-actual", "id": 1, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T09:36:22.301 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: Cluster is now healthy
2026-03-10T09:36:22.302 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "osd purge-actual", "id": 1, "yes_i_really_mean_it": true}]': finished
2026-03-10T09:36:22.302 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: osdmap e27: 7 total, 7 up, 7 in
2026-03-10T09:36:22.302 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: Successfully purged osd.1 on vm01
2026-03-10T09:36:22.302 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:22.302 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:22 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: osd.1 now down
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: Removing daemon osd.1 from vm01 -- ports []
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 613 MiB used, 159 GiB / 160 GiB avail
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: Removing key for osd.1
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth rm", "entity": "osd.1"}]: dispatch
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "auth rm", "entity": "osd.1"}]': finished
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: Successfully removed osd.1 on vm01
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd purge-actual", "id": 1, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: Cluster is now healthy
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd='[{"prefix": "osd purge-actual", "id": 1, "yes_i_really_mean_it": true}]': finished
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: osdmap e27: 7 total, 7 up, 7 in
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: Successfully purged osd.1 on vm01
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:22.475 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:22 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:23.735 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:23 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:23.735 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:23 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:23.735 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:23 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:23.735 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:23 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:23.735 INFO:teuthology.orchestra.run.vm01.stderr:+ grep '^1'
2026-03-10T09:36:23.735 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch osd rm status
2026-03-10T09:36:23.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:23.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:23.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:23.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:23 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:23.903 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch device zap vm01 /dev/vdb --force
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 586 MiB used, 139 GiB / 140 GiB avail
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:24 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 586 MiB used, 139 GiB / 140 GiB avail
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:24.853 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:24 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:25.643 INFO:teuthology.orchestra.run.vm01.stdout:zap successful for /dev/vdb on vm01
2026-03-10T09:36:25.658 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph orch daemon add osd vm01:/dev/vdb
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: from='client.14604 -' entity='client.admin' cmd=[{"prefix": "orch osd rm status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: from='client.14608 -' entity='client.admin' cmd=[{"prefix": "orch device zap", "hostname": "vm01", "path": "/dev/vdb", "force": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: Zap device vm01:/dev/vdb
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:25.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:25 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T09:36:25.965 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: from='client.14604 -' entity='client.admin' cmd=[{"prefix": "orch osd rm status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:25.966 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: from='client.14608 -' entity='client.admin' cmd=[{"prefix": "orch device zap", "hostname": "vm01", "path": "/dev/vdb", "force": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:25.966 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: Zap device vm01:/dev/vdb
2026-03-10T09:36:25.966 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:25.966 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:25.966 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:25.966 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:36:25.966 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:25.966 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:25 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T09:36:26.786 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:26 vm01 ceph-mon[50888]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 586 MiB used, 139 GiB / 140 GiB avail
2026-03-10T09:36:26.786 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:26 vm01 ceph-mon[50888]: zap successful for /dev/vdb on vm01
2026-03-10T09:36:26.786 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:26 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:26.786 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:26 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T09:36:26.786 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:26 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T09:36:26.786 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:26 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:26.786 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:26 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:26.786 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:26 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:26.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:26 vm08 ceph-mon[58470]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 586 MiB used, 139 GiB / 140 GiB avail
2026-03-10T09:36:26.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:26 vm08 ceph-mon[58470]: zap successful for /dev/vdb on vm01
2026-03-10T09:36:26.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:26 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:26.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:26 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T09:36:26.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:26 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T09:36:26.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:26 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:26.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:26 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:26.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:26 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/2629394854' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8bcdb6c8-a930-4643-8828-83d5b74f60e7"}]: dispatch
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/2629394854' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8bcdb6c8-a930-4643-8828-83d5b74f60e7"}]': finished
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: osdmap e28: 8 total, 7 up, 8 in
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='client.? 192.168.123.101:0/2386425066' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:27.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:27 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2629394854' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8bcdb6c8-a930-4643-8828-83d5b74f60e7"}]: dispatch
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2629394854' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8bcdb6c8-a930-4643-8828-83d5b74f60e7"}]': finished
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: osdmap e28: 8 total, 7 up, 8 in
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='client.? 192.168.123.101:0/2386425066' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:27.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:27 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:28 vm08 ceph-mon[58470]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T09:36:28.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:28 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T09:36:28.978 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:28 vm01 ceph-mon[50888]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T09:36:28.979 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:28 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T09:36:30.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:30 vm01 ceph-mon[50888]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T09:36:30.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:30 vm08 ceph-mon[58470]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T09:36:32.184 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: Detected new or changed devices on vm01
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: Deploying daemon osd.1 on vm01
2026-03-10T09:36:32.185 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:32 vm01 ceph-mon[50888]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: Detected new or changed devices on vm01
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: Deploying daemon osd.1 on vm01
2026-03-10T09:36:32.586 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:32 vm08 ceph-mon[58470]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T09:36:33.728 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:33.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:33.729 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:33 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:33.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T09:36:33.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:33.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:33 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:34.550 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:34 vm01 ceph-mon[50888]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-10T09:36:34.550 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys'
2026-03-10T09:36:34.550 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:34 vm01 ceph-mon[50888]: from='mgr.14217 192.168.123.101:0/1048626713'
entity='mgr.vm01.itvfys' 2026-03-10T09:36:34.617 INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 1 on host 'vm01' 2026-03-10T09:36:34.626 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph osd dump 2026-03-10T09:36:34.626 INFO:teuthology.orchestra.run.vm01.stderr:+ grep up 2026-03-10T09:36:34.629 INFO:teuthology.orchestra.run.vm01.stderr:+ grep osd.1 2026-03-10T09:36:34.797 INFO:teuthology.orchestra.run.vm01.stdout:osd.1 down in weight 1 up_from 0 up_thru 0 down_at 0 last_clean_interval [0,0) exists,new 8bcdb6c8-a930-4643-8828-83d5b74f60e7 2026-03-10T09:36:34.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:34 vm08 ceph-mon[58470]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-10T09:36:34.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:34.836 INFO:journalctl@ceph.mon.vm08.vm08.stdout:Mar 10 09:36:34 vm08 ceph-mon[58470]: from='mgr.14217 192.168.123.101:0/1048626713' entity='mgr.vm01.itvfys' 2026-03-10T09:36:34.938 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T09:36:34.940 INFO:tasks.cephadm:Teardown begin 2026-03-10T09:36:34.940 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:36:34.984 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:36:35.011 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T09:36:35.011 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 362248b4-1c64-11f1-a99c-11af91d3124e -- ceph mgr module disable cephadm 2026-03-10T09:36:35.245 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config 
/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/mon.vm01/config 2026-03-10T09:36:35.271 INFO:teuthology.orchestra.run.vm01.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-10T09:36:35.290 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T09:36:35.291 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T09:36:35.291 DEBUG:teuthology.orchestra.run.vm01:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T09:36:35.308 DEBUG:teuthology.orchestra.run.vm08:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T09:36:35.323 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T09:36:35.323 INFO:tasks.cephadm.mon.vm01:Stopping mon.vm01... 2026-03-10T09:36:35.323 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm01 2026-03-10T09:36:35.650 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:35 vm01 systemd[1]: Stopping Ceph mon.vm01 for 362248b4-1c64-11f1-a99c-11af91d3124e... 
2026-03-10T09:36:35.650 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:35 vm01 ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01[50884]: 2026-03-10T09:36:35.507+0000 7f8ebf950640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.vm01 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T09:36:35.650 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:35 vm01 ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01[50884]: 2026-03-10T09:36:35.507+0000 7f8ebf950640 -1 mon.vm01@0(leader) e2 *** Got Signal Terminated *** 2026-03-10T09:36:35.650 INFO:journalctl@ceph.mon.vm01.vm01.stdout:Mar 10 09:36:35 vm01 podman[91831]: 2026-03-10 09:36:35.634482188 +0000 UTC m=+0.154713612 container died 43041d34ec15f86df059c60a1760ed17e81dc58fc34ca6ce177de37dff03561b (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-362248b4-1c64-11f1-a99c-11af91d3124e-mon-vm01, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default) 2026-03-10T09:36:35.726 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm01.service' 2026-03-10T09:36:35.767 
DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:36:35.767 INFO:tasks.cephadm.mon.vm01:Stopped mon.vm01 2026-03-10T09:36:35.767 INFO:tasks.cephadm.mon.vm08:Stopping mon.vm08... 2026-03-10T09:36:35.767 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm08 2026-03-10T09:36:35.985 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-362248b4-1c64-11f1-a99c-11af91d3124e@mon.vm08.service' 2026-03-10T09:36:36.021 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T09:36:36.021 INFO:tasks.cephadm.mon.vm08:Stopped mon.vm08 2026-03-10T09:36:36.021 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 362248b4-1c64-11f1-a99c-11af91d3124e --force --keep-logs 2026-03-10T09:36:36.157 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:36:56.532 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 362248b4-1c64-11f1-a99c-11af91d3124e --force --keep-logs 2026-03-10T09:36:56.654 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:37:20.954 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:37:20.982 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T09:37:21.006 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T09:37:21.006 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983/remote/vm01/crash 2026-03-10T09:37:21.007 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/crash -- . 
2026-03-10T09:37:21.047 INFO:teuthology.orchestra.run.vm01.stderr:tar: /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/crash: Cannot open: No such file or directory 2026-03-10T09:37:21.047 INFO:teuthology.orchestra.run.vm01.stderr:tar: Error is not recoverable: exiting now 2026-03-10T09:37:21.048 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983/remote/vm08/crash 2026-03-10T09:37:21.048 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/crash -- . 2026-03-10T09:37:21.073 INFO:teuthology.orchestra.run.vm08.stderr:tar: /var/lib/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/crash: Cannot open: No such file or directory 2026-03-10T09:37:21.073 INFO:teuthology.orchestra.run.vm08.stderr:tar: Error is not recoverable: exiting now 2026-03-10T09:37:21.073 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T09:37:21.073 DEBUG:teuthology.orchestra.run.vm01:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v OSD_DOWN | egrep -v CEPHADM_FAILED_DAEMON | egrep -v 'but it is still running' | egrep -v PG_DEGRADED | head -n 1 2026-03-10T09:37:21.117 INFO:tasks.cephadm:Compressing logs... 
2026-03-10T09:37:21.117 DEBUG:teuthology.orchestra.run.vm01:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T09:37:21.159 DEBUG:teuthology.orchestra.run.vm08:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T09:37:21.181 INFO:teuthology.orchestra.run.vm01.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T09:37:21.181 INFO:teuthology.orchestra.run.vm01.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T09:37:21.182 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mon.vm01.log 2026-03-10T09:37:21.182 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.log 2026-03-10T09:37:21.184 INFO:teuthology.orchestra.run.vm08.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T09:37:21.184 INFO:teuthology.orchestra.run.vm08.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T09:37:21.185 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-volume.log 2026-03-10T09:37:21.186 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-client.ceph-exporter.vm08.log 2026-03-10T09:37:21.186 INFO:teuthology.orchestra.run.vm08.stderr: 91.9% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T09:37:21.186 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mgr.vm08.pllkti.log 2026-03-10T09:37:21.186 
INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-client.ceph-exporter.vm08.log: 29.0% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-client.ceph-exporter.vm08.log.gz 2026-03-10T09:37:21.187 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mon.vm08.log 2026-03-10T09:37:21.187 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mon.vm01.log: gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mgr.vm01.itvfys.log 2026-03-10T09:37:21.189 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.log: 83.3% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.log.gz 2026-03-10T09:37:21.189 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mgr.vm08.pllkti.log: 90.9% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mgr.vm08.pllkti.log.gz 2026-03-10T09:37:21.189 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.audit.log 2026-03-10T09:37:21.190 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mon.vm08.log: 95.7% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-volume.log.gz 2026-03-10T09:37:21.191 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.log 2026-03-10T09:37:21.192 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.audit.log: 90.7% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.audit.log.gz 2026-03-10T09:37:21.193 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- 
/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.cephadm.log 2026-03-10T09:37:21.193 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.audit.log 2026-03-10T09:37:21.193 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.log: 82.2% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.log.gz 2026-03-10T09:37:21.195 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.0.log 2026-03-10T09:37:21.195 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.cephadm.log: 81.6% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.cephadm.log.gz 2026-03-10T09:37:21.195 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.2.log 2026-03-10T09:37:21.197 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mgr.vm01.itvfys.log: 91.9% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T09:37:21.199 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.cephadm.log 2026-03-10T09:37:21.201 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.0.log: gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.4.log 2026-03-10T09:37:21.202 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.audit.log: 90.6% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.audit.log.gz 2026-03-10T09:37:21.206 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-volume.log 2026-03-10T09:37:21.206 
INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.cephadm.log: 83.1% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph.cephadm.log.gz 2026-03-10T09:37:21.206 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.2.log: 93.1% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.0.log.gz 2026-03-10T09:37:21.207 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.6.log 2026-03-10T09:37:21.210 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-client.ceph-exporter.vm01.log 2026-03-10T09:37:21.219 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.1.log 2026-03-10T09:37:21.222 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-client.ceph-exporter.vm01.log: 90.7% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-client.ceph-exporter.vm01.log.gz 2026-03-10T09:37:21.223 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.4.log: /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.6.log: 92.2% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mon.vm08.log.gz 2026-03-10T09:37:21.225 INFO:teuthology.orchestra.run.vm08.stderr: 93.1% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.2.log.gz 2026-03-10T09:37:21.226 INFO:teuthology.orchestra.run.vm01.stderr: 95.7% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-volume.log.gz 2026-03-10T09:37:21.226 INFO:teuthology.orchestra.run.vm08.stderr: 93.1% -- replaced with 
/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.4.log.gz 2026-03-10T09:37:21.227 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.3.log 2026-03-10T09:37:21.236 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.5.log 2026-03-10T09:37:21.237 INFO:teuthology.orchestra.run.vm08.stderr: 93.3% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.6.log.gz 2026-03-10T09:37:21.239 INFO:teuthology.orchestra.run.vm08.stderr: 2026-03-10T09:37:21.239 INFO:teuthology.orchestra.run.vm08.stderr:real 0m0.065s 2026-03-10T09:37:21.239 INFO:teuthology.orchestra.run.vm08.stderr:user 0m0.087s 2026-03-10T09:37:21.239 INFO:teuthology.orchestra.run.vm08.stderr:sys 0m0.019s 2026-03-10T09:37:21.247 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.3.log: gzip -5 --verbose -- /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.7.log 2026-03-10T09:37:21.267 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.5.log: /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.7.log: 93.1% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.3.log.gz 2026-03-10T09:37:21.268 INFO:teuthology.orchestra.run.vm01.stderr: 93.6% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.1.log.gz 2026-03-10T09:37:21.276 INFO:teuthology.orchestra.run.vm01.stderr: 93.1% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.5.log.gz 2026-03-10T09:37:21.276 INFO:teuthology.orchestra.run.vm01.stderr: 93.2% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-osd.7.log.gz 2026-03-10T09:37:21.286 INFO:teuthology.orchestra.run.vm01.stderr: 90.0% -- replaced with 
/var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mgr.vm01.itvfys.log.gz 2026-03-10T09:37:21.287 INFO:teuthology.orchestra.run.vm01.stderr: 91.7% -- replaced with /var/log/ceph/362248b4-1c64-11f1-a99c-11af91d3124e/ceph-mon.vm01.log.gz 2026-03-10T09:37:21.288 INFO:teuthology.orchestra.run.vm01.stderr: 2026-03-10T09:37:21.288 INFO:teuthology.orchestra.run.vm01.stderr:real 0m0.117s 2026-03-10T09:37:21.288 INFO:teuthology.orchestra.run.vm01.stderr:user 0m0.205s 2026-03-10T09:37:21.288 INFO:teuthology.orchestra.run.vm01.stderr:sys 0m0.020s 2026-03-10T09:37:21.289 INFO:tasks.cephadm:Archiving logs... 2026-03-10T09:37:21.289 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983/remote/vm01/log 2026-03-10T09:37:21.289 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T09:37:21.363 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983/remote/vm08/log 2026-03-10T09:37:21.363 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T09:37:21.392 INFO:tasks.cephadm:Removing cluster... 2026-03-10T09:37:21.393 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 362248b4-1c64-11f1-a99c-11af91d3124e --force 2026-03-10T09:37:21.530 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:37:21.619 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 362248b4-1c64-11f1-a99c-11af91d3124e --force 2026-03-10T09:37:21.747 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: 362248b4-1c64-11f1-a99c-11af91d3124e 2026-03-10T09:37:21.842 INFO:tasks.cephadm:Removing cephadm ... 
2026-03-10T09:37:21.842 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T09:37:21.859 DEBUG:teuthology.orchestra.run.vm08:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T09:37:21.875 INFO:tasks.cephadm:Teardown complete 2026-03-10T09:37:21.875 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-10T09:37:21.877 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-10T09:37:21.877 DEBUG:teuthology.orchestra.run.vm01:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T09:37:21.902 DEBUG:teuthology.orchestra.run.vm08:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T09:37:21.915 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found 2026-03-10T09:37:21.918 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-10T09:37:21.919 INFO:teuthology.orchestra.run.vm01.stdout:=============================================================================== 2026-03-10T09:37:21.919 INFO:teuthology.orchestra.run.vm01.stdout:^+ 141.98.138.220 3 6 177 28 +929us[ +911us] +/- 32ms 2026-03-10T09:37:21.919 INFO:teuthology.orchestra.run.vm01.stdout:^+ x1.ncomputers.org 2 6 177 30 +1979us[+1962us] +/- 47ms 2026-03-10T09:37:21.919 INFO:teuthology.orchestra.run.vm01.stdout:^+ ntp1.wtnet.de 2 6 177 30 +1632us[+1614us] +/- 19ms 2026-03-10T09:37:21.919 INFO:teuthology.orchestra.run.vm01.stdout:^* 193.158.22.13 1 6 177 27 -2999us[-3017us] +/- 17ms 2026-03-10T09:37:21.930 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found 2026-03-10T09:37:21.933 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-10T09:37:21.933 INFO:teuthology.orchestra.run.vm08.stdout:=============================================================================== 2026-03-10T09:37:21.933 
INFO:teuthology.orchestra.run.vm08.stdout:^+ x1.ncomputers.org 2 6 177 30 +2002us[+2706us] +/- 46ms 2026-03-10T09:37:21.934 INFO:teuthology.orchestra.run.vm08.stdout:^+ ntp1.wtnet.de 2 6 177 28 +1664us[+1664us] +/- 19ms 2026-03-10T09:37:21.934 INFO:teuthology.orchestra.run.vm08.stdout:^* 193.158.22.13 1 6 177 29 -2969us[-2264us] +/- 17ms 2026-03-10T09:37:21.934 INFO:teuthology.orchestra.run.vm08.stdout:^+ 141.98.138.220 3 6 177 30 +980us[+1685us] +/- 32ms 2026-03-10T09:37:21.934 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-10T09:37:21.937 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-10T09:37:21.937 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-10T09:37:21.939 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-10T09:37:21.942 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-10T09:37:21.944 INFO:teuthology.task.internal:Duration was 385.201073 seconds 2026-03-10T09:37:21.944 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-10T09:37:21.946 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-10T09:37:21.947 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T09:37:21.961 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T09:37:22.001 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T09:37:22.014 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T09:37:22.512 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
2026-03-10T09:37:22.512 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm01.local 2026-03-10T09:37:22.512 DEBUG:teuthology.orchestra.run.vm01:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T09:37:22.579 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm08.local 2026-03-10T09:37:22.579 DEBUG:teuthology.orchestra.run.vm08:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root 
filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T09:37:22.608 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T09:37:22.608 DEBUG:teuthology.orchestra.run.vm01:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:37:22.621 DEBUG:teuthology.orchestra.run.vm08:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:37:23.132 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T09:37:23.133 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T09:37:23.134 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T09:37:23.160 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:37:23.160 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:37:23.160 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T09:37:23.160 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:37:23.160 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz/home/ubuntu/cephtest/archive/syslog/journalctl.log:
2026-03-10T09:37:23.162 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T09:37:23.163 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T09:37:23.163 INFO:teuthology.orchestra.run.vm08.stderr:gzip/home/ubuntu/cephtest/archive/syslog/kern.log: -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T09:37:23.163 INFO:teuthology.orchestra.run.vm08.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T09:37:23.164 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T09:37:23.323 INFO:teuthology.orchestra.run.vm01.stderr: 98.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T09:37:23.333 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T09:37:23.335 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T09:37:23.338 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T09:37:23.338 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T09:37:23.395 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T09:37:23.425 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T09:37:23.427 DEBUG:teuthology.orchestra.run.vm01:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:37:23.437 DEBUG:teuthology.orchestra.run.vm08:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:37:23.463 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = core
2026-03-10T09:37:23.493 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = core
2026-03-10T09:37:23.507 DEBUG:teuthology.orchestra.run.vm01:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:37:23.536 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:37:23.536 DEBUG:teuthology.orchestra.run.vm08:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T09:37:23.564 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T09:37:23.564 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T09:37:23.567 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T09:37:23.567 DEBUG:teuthology.misc:Transferring archived files from vm01:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983/remote/vm01
2026-03-10T09:37:23.567 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T09:37:23.606 DEBUG:teuthology.misc:Transferring archived files from vm08:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/983/remote/vm08
2026-03-10T09:37:23.606 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T09:37:23.638 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T09:37:23.638 DEBUG:teuthology.orchestra.run.vm01:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T09:37:23.647 DEBUG:teuthology.orchestra.run.vm08:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T09:37:23.695 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T09:37:23.698 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T09:37:23.698 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T09:37:23.701 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T09:37:23.701 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T09:37:23.703 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T09:37:23.719 INFO:teuthology.orchestra.run.vm01.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 09:37 /home/ubuntu/cephtest
2026-03-10T09:37:23.752 INFO:teuthology.orchestra.run.vm08.stdout: 8532144 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 09:37 /home/ubuntu/cephtest
2026-03-10T09:37:23.753 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T09:37:23.759 INFO:teuthology.run:Summary data: description: orch/cephadm/osds/{0-distro/centos_9.stream 1-start 2-ops/rm-zap-add} duration: 385.20107340812683 owner: kyr success: true
2026-03-10T09:37:23.759 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T09:37:23.778 INFO:teuthology.run:pass