2026-03-09T18:06:22.893 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T18:06:22.897 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T18:06:22.914 INFO:teuthology.run:Config: archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597
branch: squid
description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_ca_signed_key}
email: null
first_in_suite: false
flavor: default
job_id: '597'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    use-ca-signed-key: true
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
  - client.0
- - host.b
  - mon.b
  - mgr.b
  - osd.1
  - client.1
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZcANB141zyBwO32g5Zcmua5Y9E9wy+I2TpZY4dzvP+GDW+B0YvnUberNyk/5WJh4YDl0H5KZ4vDaM0mICoChA=
  vm09.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDtqjGUSuxAopoTriOtFTo+c5Un/8+MOp08+nmdKBlH3Y0fdaFNZBL833HceJSaCo7Q5PcbzUmbZXAKb0UMcO/I=
tasks:
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - "set -ex\nHOSTNAMES=$(ceph orch host ls --format json | jq -r '.[] | .hostname')\nfor host in $HOSTNAMES; do\n  # do a check-host on each host to make sure it's reachable\n  ceph cephadm check-host ${host} 2> ${host}-ok.txt\n  HOST_OK=$(cat ${host}-ok.txt)\n  if ! grep -q \"Host looks OK\" <<< \"$HOST_OK\"; then\n    printf \"Failed host check:\\n\\n$HOST_OK\"\n    exit 1\n  fi\ndone\n"
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T18:06:22.914 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T18:06:22.914 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T18:06:22.914 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T18:06:22.915 INFO:teuthology.task.internal:Checking packages...
2026-03-09T18:06:22.915 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T18:06:22.915 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T18:06:22.915 INFO:teuthology.packaging:ref: None
2026-03-09T18:06:22.915 INFO:teuthology.packaging:tag: None
2026-03-09T18:06:22.915 INFO:teuthology.packaging:branch: squid
2026-03-09T18:06:22.915 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:06:22.915 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T18:06:23.546 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T18:06:23.547 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T18:06:23.548 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T18:06:23.548 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-09T18:06:23.548 INFO:teuthology.task.internal:Saving configuration
2026-03-09T18:06:23.553 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T18:06:23.554 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T18:06:23.560 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 18:05:19.831600', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZcANB141zyBwO32g5Zcmua5Y9E9wy+I2TpZY4dzvP+GDW+B0YvnUberNyk/5WJh4YDl0H5KZ4vDaM0mICoChA='}
2026-03-09T18:06:23.566 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm09.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 18:05:19.832407', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:09', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDtqjGUSuxAopoTriOtFTo+c5Un/8+MOp08+nmdKBlH3Y0fdaFNZBL833HceJSaCo7Q5PcbzUmbZXAKb0UMcO/I='}
2026-03-09T18:06:23.566 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T18:06:23.567 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0', 'client.0']
2026-03-09T18:06:23.567 INFO:teuthology.task.internal:roles: ubuntu@vm09.local - ['host.b', 'mon.b', 'mgr.b', 'osd.1', 'client.1']
2026-03-09T18:06:23.567 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T18:06:23.572 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-09T18:06:23.577 DEBUG:teuthology.task.console_log:vm09 does not support IPMI; excluding
2026-03-09T18:06:23.577 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f99c04afd90>, signals=[15])
2026-03-09T18:06:23.577 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T18:06:23.578 INFO:teuthology.task.internal:Opening connections...
2026-03-09T18:06:23.578 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-09T18:06:23.578 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:06:23.638 DEBUG:teuthology.task.internal:connecting to ubuntu@vm09.local
2026-03-09T18:06:23.638 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:06:23.699 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T18:06:23.700 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-09T18:06:23.703 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-09T18:06:23.703 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:NAME="Ubuntu"
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="22.04"
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_CODENAME=jammy
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:ID=ubuntu
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE=debian
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T18:06:23.747 INFO:teuthology.orchestra.run.vm03.stdout:UBUNTU_CODENAME=jammy
2026-03-09T18:06:23.748 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-09T18:06:23.753 DEBUG:teuthology.orchestra.run.vm09:> uname -m
2026-03-09T18:06:23.763 INFO:teuthology.orchestra.run.vm09.stdout:x86_64
2026-03-09T18:06:23.763 DEBUG:teuthology.orchestra.run.vm09:> cat /etc/os-release
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:NAME="Ubuntu"
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_ID="22.04"
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:VERSION_CODENAME=jammy
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:ID=ubuntu
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:ID_LIKE=debian
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T18:06:23.806 INFO:teuthology.orchestra.run.vm09.stdout:UBUNTU_CODENAME=jammy
2026-03-09T18:06:23.806 INFO:teuthology.lock.ops:Updating vm09.local on lock server
2026-03-09T18:06:23.810 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T18:06:23.812 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T18:06:23.813 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T18:06:23.813 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-09T18:06:23.814 DEBUG:teuthology.orchestra.run.vm09:> test '!' -e /home/ubuntu/cephtest
2026-03-09T18:06:23.850 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T18:06:23.851 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T18:06:23.851 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-09T18:06:23.857 DEBUG:teuthology.orchestra.run.vm09:> test -z $(ls -A /var/lib/ceph)
2026-03-09T18:06:23.860 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T18:06:23.894 INFO:teuthology.orchestra.run.vm09.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T18:06:23.894 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T18:06:23.901 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-09T18:06:23.903 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:06:24.322 DEBUG:teuthology.orchestra.run.vm09:> test -e /ceph-qa-ready
2026-03-09T18:06:24.325 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:06:24.560 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T18:06:24.561 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T18:06:24.561 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T18:06:24.562 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T18:06:24.565 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T18:06:24.567 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T18:06:24.568 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T18:06:24.568 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T18:06:24.605 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T18:06:24.612 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T18:06:24.613 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T18:06:24.613 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T18:06:24.650 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:06:24.650 DEBUG:teuthology.orchestra.run.vm09:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T18:06:24.653 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:06:24.653 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T18:06:24.693 DEBUG:teuthology.orchestra.run.vm09:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T18:06:24.699 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:06:24.702 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:06:24.703 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:06:24.706 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T18:06:24.707 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T18:06:24.708 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T18:06:24.708 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T18:06:24.745 DEBUG:teuthology.orchestra.run.vm09:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T18:06:24.759 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T18:06:24.761 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T18:06:24.761 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T18:06:24.792 DEBUG:teuthology.orchestra.run.vm09:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T18:06:24.806 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:06:24.839 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:06:24.883 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T18:06:24.883 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T18:06:24.931 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:06:24.934 DEBUG:teuthology.orchestra.run.vm09:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:06:24.978 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T18:06:24.978 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T18:06:25.031 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-09T18:06:25.032 DEBUG:teuthology.orchestra.run.vm09:> sudo service rsyslog restart
2026-03-09T18:06:25.087 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T18:06:25.089 INFO:teuthology.task.internal:Starting timer...
2026-03-09T18:06:25.089 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T18:06:25.091 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T18:06:25.093 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-09T18:06:25.093 INFO:teuthology.task.selinux:Excluding vm09: VMs are not yet supported
2026-03-09T18:06:25.093 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T18:06:25.093 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T18:06:25.093 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T18:06:25.093 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T18:06:25.095 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T18:06:25.095 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T18:06:25.097 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T18:06:25.631 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T18:06:25.636 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T18:06:25.636 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryu7i_adwk --limit vm03.local,vm09.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T18:08:38.358 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm03.local'), Remote(name='ubuntu@vm09.local')]
2026-03-09T18:08:38.358 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-09T18:08:38.359 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:08:38.419 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-09T18:08:38.649 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-09T18:08:38.649 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm09.local'
2026-03-09T18:08:38.649 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm09.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T18:08:38.706 DEBUG:teuthology.orchestra.run.vm09:> true
2026-03-09T18:08:38.961 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm09.local'
2026-03-09T18:08:38.961 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T18:08:39.039 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T18:08:39.039 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T18:08:39.039 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:08:39.041 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T18:08:39.041 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:08:39.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Command line: ntpd -gq
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: ----------------------------------------------------
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: corporation. Support and training for ntp-4 are
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: available at https://www.nwtime.org/support
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: ----------------------------------------------------
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Command line: ntpd -gq
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: ----------------------------------------------------
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: corporation. Support and training for ntp-4 are
2026-03-09T18:08:39.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: available at https://www.nwtime.org/support
2026-03-09T18:08:39.061 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: ----------------------------------------------------
2026-03-09T18:08:39.061 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: proto: precision = 0.029 usec (-25)
2026-03-09T18:08:39.061 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: basedate set to 2022-02-04
2026-03-09T18:08:39.061 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: gps base set to 2022-02-06 (week 2196)
2026-03-09T18:08:39.061 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T18:08:39.061 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: proto: precision = 0.030 usec (-25)
2026-03-09T18:08:39.061 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T18:08:39.061 INFO:teuthology.orchestra.run.vm09.stderr: 9 Mar 18:08:39 ntpd[16109]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: basedate set to 2022-02-04
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: gps base set to 2022-02-06 (week 2196)
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm03.stderr: 9 Mar 18:08:39 ntpd[16105]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Listen normally on 3 ens3 192.168.123.109:123
2026-03-09T18:08:39.062 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Listen normally on 4 lo [::1]:123
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:9%2]:123
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:39 ntpd[16109]: Listening on routing socket on fd #22 for interface updates
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Listen normally on 3 ens3 192.168.123.103:123
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Listen normally on 4 lo [::1]:123
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:3%2]:123
2026-03-09T18:08:39.063 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:39 ntpd[16105]: Listening on routing socket on fd #22 for interface updates
2026-03-09T18:08:40.061 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:40 ntpd[16109]: Soliciting pool server 172.104.134.72
2026-03-09T18:08:40.062 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:40 ntpd[16105]: Soliciting pool server 158.180.28.150
2026-03-09T18:08:41.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:41 ntpd[16109]: Soliciting pool server 141.84.43.73
2026-03-09T18:08:41.060 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:41 ntpd[16109]: Soliciting pool server 195.201.125.53
2026-03-09T18:08:41.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:41 ntpd[16105]: Soliciting pool server 172.104.134.72
2026-03-09T18:08:41.061 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:41 ntpd[16105]: Soliciting pool server 116.203.244.102
2026-03-09T18:08:42.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:42 ntpd[16109]: Soliciting pool server 129.70.132.32
2026-03-09T18:08:42.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:42 ntpd[16109]: Soliciting pool server 185.168.228.58
2026-03-09T18:08:42.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:42 ntpd[16109]: Soliciting pool server 158.101.188.125
2026-03-09T18:08:42.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:42 ntpd[16105]: Soliciting pool server 195.201.125.53
2026-03-09T18:08:42.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:42 ntpd[16105]: Soliciting pool server 141.84.43.73
2026-03-09T18:08:42.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:42 ntpd[16105]: Soliciting pool server 144.91.126.59
2026-03-09T18:08:43.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:43 ntpd[16109]: Soliciting pool server 78.46.87.46
2026-03-09T18:08:43.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:43 ntpd[16109]: Soliciting pool server 93.177.65.20
2026-03-09T18:08:43.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:43 ntpd[16109]: Soliciting pool server 158.180.28.150
2026-03-09T18:08:43.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:43 ntpd[16109]: Soliciting pool server 176.9.44.212
2026-03-09T18:08:43.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:43 ntpd[16105]: Soliciting pool server 158.101.188.125
2026-03-09T18:08:43.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:43 ntpd[16105]: Soliciting pool server 129.70.132.32
2026-03-09T18:08:43.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:43 ntpd[16105]: Soliciting pool server 185.168.228.58
2026-03-09T18:08:43.060 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:43 ntpd[16105]: Soliciting pool server 178.215.228.24
2026-03-09T18:08:44.059 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:44 ntpd[16105]: Soliciting pool server 176.9.44.212
2026-03-09T18:08:44.059 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:44 ntpd[16105]: Soliciting pool server 78.46.87.46
2026-03-09T18:08:44.059 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:44 ntpd[16105]: Soliciting pool server 93.177.65.20
2026-03-09T18:08:44.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:44 ntpd[16109]: Soliciting pool server 18.192.244.117
2026-03-09T18:08:44.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:44 ntpd[16109]: Soliciting pool server 144.91.126.59
2026-03-09T18:08:44.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:44 ntpd[16109]: Soliciting pool server 116.203.244.102
2026-03-09T18:08:44.059 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:44 ntpd[16109]: Soliciting pool server 185.125.190.57
2026-03-09T18:08:44.059 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:44 ntpd[16105]: Soliciting pool server 185.125.190.58
2026-03-09T18:08:45.058 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:45 ntpd[16109]: Soliciting pool server 185.125.190.56
2026-03-09T18:08:45.058 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:45 ntpd[16109]: Soliciting pool server 144.76.59.37
2026-03-09T18:08:45.058 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:45 ntpd[16109]: Soliciting pool server 2003:a:42b:e400::3
2026-03-09T18:08:45.059 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:45 ntpd[16105]: Soliciting pool server 185.125.190.57
2026-03-09T18:08:45.059 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:45 ntpd[16105]: Soliciting pool server 18.192.244.117
2026-03-09T18:08:45.059 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:45 ntpd[16105]: Soliciting pool server 2001:67c:dac::1
2026-03-09T18:08:47.079 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 18:08:47 ntpd[16105]: ntpd: time slew +0.000265 s
2026-03-09T18:08:47.079 INFO:teuthology.orchestra.run.vm03.stdout:ntpd: time slew +0.000265s
2026-03-09T18:08:47.099 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T18:08:47.099 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-09T18:08:47.099 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:47.099 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:47.099 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:47.099 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:47.099 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:50.081 INFO:teuthology.orchestra.run.vm09.stdout: 9 Mar 18:08:50 ntpd[16109]: ntpd: time slew +0.004952 s
2026-03-09T18:08:50.081 INFO:teuthology.orchestra.run.vm09.stdout:ntpd: time slew +0.004952s
2026-03-09T18:08:50.101 INFO:teuthology.orchestra.run.vm09.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T18:08:50.101 INFO:teuthology.orchestra.run.vm09.stdout:==============================================================================
2026-03-09T18:08:50.101 INFO:teuthology.orchestra.run.vm09.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:50.101 INFO:teuthology.orchestra.run.vm09.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:50.101 INFO:teuthology.orchestra.run.vm09.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:50.101 INFO:teuthology.orchestra.run.vm09.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:50.101 INFO:teuthology.orchestra.run.vm09.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:08:50.101 INFO:teuthology.run_tasks:Running task install...
2026-03-09T18:08:50.104 DEBUG:teuthology.task.install:project ceph 2026-03-09T18:08:50.104 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-09T18:08:50.104 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}} 2026-03-09T18:08:50.104 INFO:teuthology.task.install:Using flavor: default 2026-03-09T18:08:50.106 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']} 2026-03-09T18:08:50.106 INFO:teuthology.task.install:extra packages: [] 2026-03-09T18:08:50.106 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-key list | grep Ceph 2026-03-09T18:08:50.106 DEBUG:teuthology.orchestra.run.vm09:> sudo apt-key list | grep Ceph 2026-03-09T18:08:50.149 INFO:teuthology.orchestra.run.vm03.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 
2026-03-09T18:08:50.172 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-09T18:08:50.172 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph.com (release key) 2026-03-09T18:08:50.172 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-09T18:08:50.172 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-09T18:08:50.172 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:08:50.210 INFO:teuthology.orchestra.run.vm09.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 
2026-03-09T18:08:50.211 INFO:teuthology.orchestra.run.vm09.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-09T18:08:50.211 INFO:teuthology.orchestra.run.vm09.stdout:uid [ unknown] Ceph.com (release key) 2026-03-09T18:08:50.211 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-09T18:08:50.211 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-09T18:08:50.211 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T18:08:50.796 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T18:08:50.796 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T18:08:50.829 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T18:08:50.829 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T18:08:51.313 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T18:08:51.314 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T18:08:51.320 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:08:51.320 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T18:08:51.321 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update 2026-03-09T18:08:51.329 DEBUG:teuthology.orchestra.run.vm09:> sudo apt-get 
update 2026-03-09T18:08:51.517 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T18:08:51.626 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T18:08:51.641 INFO:teuthology.orchestra.run.vm09.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T18:08:51.641 INFO:teuthology.orchestra.run.vm09.stdout:Hit:2 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T18:08:51.661 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T18:08:51.674 INFO:teuthology.orchestra.run.vm09.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T18:08:51.697 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T18:08:51.711 INFO:teuthology.orchestra.run.vm09.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T18:08:52.007 INFO:teuthology.orchestra.run.vm09.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T18:08:52.014 INFO:teuthology.orchestra.run.vm03.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T18:08:52.119 INFO:teuthology.orchestra.run.vm09.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-09T18:08:52.128 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-09T18:08:52.232 INFO:teuthology.orchestra.run.vm09.stdout:Ign:7 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T18:08:52.243 INFO:teuthology.orchestra.run.vm03.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T18:08:52.345 INFO:teuthology.orchestra.run.vm09.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T18:08:52.357 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T18:08:52.419 INFO:teuthology.orchestra.run.vm09.stdout:Fetched 25.8 kB in 1s (27.7 kB/s) 2026-03-09T18:08:52.436 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 25.8 kB in 1s (27.0 kB/s) 2026-03-09T18:08:53.148 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T18:08:53.159 DEBUG:teuthology.orchestra.run.vm09:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T18:08:53.196 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:08:53.201 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T18:08:53.208 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T18:08:53.241 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:08:53.386 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T18:08:53.386 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T18:08:53.433 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T18:08:53.434 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout:The following additional packages will be installed: 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T18:08:53.560 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable python3-psutil 
python3-py python3-pygments 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout:Suggested packages: 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: smart-notifier mailx | mailutils 2026-03-09T18:08:53.561 
INFO:teuthology.orchestra.run.vm09.stdout:Recommended packages: 2026-03-09T18:08:53.561 INFO:teuthology.orchestra.run.vm09.stdout: btrfs-tools 2026-03-09T18:08:53.594 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout:The following NEW packages will be installed: 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections 
python3-jaraco.functools 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: socat unzip xmlstarlet zip 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be upgraded: 2026-03-09T18:08:53.636 INFO:teuthology.orchestra.run.vm09.stdout: librados2 librbd1 2026-03-09T18:08:53.637 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T18:08:53.637 INFO:teuthology.orchestra.run.vm03.stdout: 
libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout:The following additional packages will be installed: 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript 
python3-pecan python3-pluggy python3-portend 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout:Suggested packages: 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 
2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: smart-notifier mailx | mailutils 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout:Recommended packages: 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: btrfs-tools 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections 
python3-jaraco.functools 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T18:08:53.638 INFO:teuthology.orchestra.run.vm03.stdout: socat unzip xmlstarlet zip 2026-03-09T18:08:53.639 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be upgraded: 2026-03-09T18:08:53.639 INFO:teuthology.orchestra.run.vm03.stdout: librados2 librbd1 2026-03-09T18:08:53.693 INFO:teuthology.orchestra.run.vm09.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 
2026-03-09T18:08:53.693 INFO:teuthology.orchestra.run.vm09.stdout:Need to get 178 MB of archives. 2026-03-09T18:08:53.693 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-09T18:08:53.693 INFO:teuthology.orchestra.run.vm09.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T18:08:53.730 INFO:teuthology.orchestra.run.vm03.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:08:53.730 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 178 MB of archives. 2026-03-09T18:08:53.730 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-09T18:08:53.730 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T18:08:53.731 INFO:teuthology.orchestra.run.vm09.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T18:08:53.732 INFO:teuthology.orchestra.run.vm09.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T18:08:53.740 INFO:teuthology.orchestra.run.vm09.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T18:08:53.765 INFO:teuthology.orchestra.run.vm09.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T18:08:53.767 INFO:teuthology.orchestra.run.vm09.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T18:08:53.774 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 
3.1.7-4 [39.0 kB] 2026-03-09T18:08:53.783 INFO:teuthology.orchestra.run.vm09.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T18:08:53.783 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T18:08:53.784 INFO:teuthology.orchestra.run.vm09.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T18:08:53.785 INFO:teuthology.orchestra.run.vm09.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T18:08:53.785 INFO:teuthology.orchestra.run.vm09.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T18:08:53.785 INFO:teuthology.orchestra.run.vm09.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T18:08:53.787 INFO:teuthology.orchestra.run.vm03.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T18:08:53.788 INFO:teuthology.orchestra.run.vm09.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T18:08:53.789 INFO:teuthology.orchestra.run.vm09.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T18:08:53.790 INFO:teuthology.orchestra.run.vm09.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T18:08:53.791 INFO:teuthology.orchestra.run.vm09.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T18:08:53.796 INFO:teuthology.orchestra.run.vm09.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip 
amd64 3.0-12build2 [176 kB] 2026-03-09T18:08:53.798 INFO:teuthology.orchestra.run.vm09.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T18:08:53.799 INFO:teuthology.orchestra.run.vm09.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T18:08:53.800 INFO:teuthology.orchestra.run.vm09.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T18:08:53.801 INFO:teuthology.orchestra.run.vm09.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T18:08:53.804 INFO:teuthology.orchestra.run.vm09.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T18:08:53.805 INFO:teuthology.orchestra.run.vm09.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T18:08:53.806 INFO:teuthology.orchestra.run.vm09.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T18:08:53.806 INFO:teuthology.orchestra.run.vm09.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T18:08:53.806 INFO:teuthology.orchestra.run.vm09.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T18:08:53.812 INFO:teuthology.orchestra.run.vm09.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T18:08:53.812 INFO:teuthology.orchestra.run.vm09.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T18:08:53.813 INFO:teuthology.orchestra.run.vm09.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main 
amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T18:08:53.815 INFO:teuthology.orchestra.run.vm09.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T18:08:53.815 INFO:teuthology.orchestra.run.vm09.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T18:08:53.823 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T18:08:53.825 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T18:08:53.825 INFO:teuthology.orchestra.run.vm09.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T18:08:53.826 INFO:teuthology.orchestra.run.vm09.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T18:08:53.826 INFO:teuthology.orchestra.run.vm09.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T18:08:53.827 INFO:teuthology.orchestra.run.vm09.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T18:08:53.841 INFO:teuthology.orchestra.run.vm09.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T18:08:53.844 INFO:teuthology.orchestra.run.vm03.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T18:08:53.845 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T18:08:53.846 
INFO:teuthology.orchestra.run.vm03.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T18:08:53.847 INFO:teuthology.orchestra.run.vm03.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T18:08:53.847 INFO:teuthology.orchestra.run.vm03.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T18:08:53.848 INFO:teuthology.orchestra.run.vm09.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T18:08:53.849 INFO:teuthology.orchestra.run.vm09.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T18:08:53.850 INFO:teuthology.orchestra.run.vm03.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T18:08:53.851 INFO:teuthology.orchestra.run.vm03.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T18:08:53.852 INFO:teuthology.orchestra.run.vm03.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T18:08:53.852 INFO:teuthology.orchestra.run.vm03.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T18:08:53.853 INFO:teuthology.orchestra.run.vm03.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T18:08:53.858 INFO:teuthology.orchestra.run.vm09.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T18:08:53.858 INFO:teuthology.orchestra.run.vm09.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T18:08:53.858 
INFO:teuthology.orchestra.run.vm09.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T18:08:53.859 INFO:teuthology.orchestra.run.vm09.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T18:08:53.859 INFO:teuthology.orchestra.run.vm09.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T18:08:53.861 INFO:teuthology.orchestra.run.vm03.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T18:08:53.861 INFO:teuthology.orchestra.run.vm09.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T18:08:53.861 INFO:teuthology.orchestra.run.vm09.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T18:08:53.862 INFO:teuthology.orchestra.run.vm03.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T18:08:53.863 INFO:teuthology.orchestra.run.vm03.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T18:08:53.864 INFO:teuthology.orchestra.run.vm03.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T18:08:53.864 INFO:teuthology.orchestra.run.vm03.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T18:08:53.865 INFO:teuthology.orchestra.run.vm03.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T18:08:53.865 INFO:teuthology.orchestra.run.vm03.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 
python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T18:08:53.865 INFO:teuthology.orchestra.run.vm09.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T18:08:53.865 INFO:teuthology.orchestra.run.vm03.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T18:08:53.865 INFO:teuthology.orchestra.run.vm09.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T18:08:53.865 INFO:teuthology.orchestra.run.vm03.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T18:08:53.871 INFO:teuthology.orchestra.run.vm03.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T18:08:53.871 INFO:teuthology.orchestra.run.vm03.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T18:08:53.873 INFO:teuthology.orchestra.run.vm09.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T18:08:53.875 INFO:teuthology.orchestra.run.vm03.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T18:08:53.880 INFO:teuthology.orchestra.run.vm03.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T18:08:53.880 INFO:teuthology.orchestra.run.vm03.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T18:08:53.880 INFO:teuthology.orchestra.run.vm03.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T18:08:53.900 INFO:teuthology.orchestra.run.vm03.stdout:Get:32 https://archive.ubuntu.com/ubuntu 
jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T18:08:53.902 INFO:teuthology.orchestra.run.vm09.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T18:08:53.903 INFO:teuthology.orchestra.run.vm09.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T18:08:53.904 INFO:teuthology.orchestra.run.vm09.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T18:08:53.907 INFO:teuthology.orchestra.run.vm03.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T18:08:53.908 INFO:teuthology.orchestra.run.vm03.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T18:08:53.913 INFO:teuthology.orchestra.run.vm09.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T18:08:53.913 INFO:teuthology.orchestra.run.vm09.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T18:08:53.914 INFO:teuthology.orchestra.run.vm09.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T18:08:53.914 INFO:teuthology.orchestra.run.vm09.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T18:08:53.915 INFO:teuthology.orchestra.run.vm09.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T18:08:53.915 INFO:teuthology.orchestra.run.vm09.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T18:08:53.917 
INFO:teuthology.orchestra.run.vm03.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T18:08:53.917 INFO:teuthology.orchestra.run.vm03.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T18:08:53.919 INFO:teuthology.orchestra.run.vm03.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T18:08:53.924 INFO:teuthology.orchestra.run.vm09.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T18:08:53.925 INFO:teuthology.orchestra.run.vm09.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T18:08:53.929 INFO:teuthology.orchestra.run.vm03.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T18:08:53.929 INFO:teuthology.orchestra.run.vm03.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T18:08:53.930 INFO:teuthology.orchestra.run.vm03.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T18:08:53.930 INFO:teuthology.orchestra.run.vm03.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T18:08:53.931 INFO:teuthology.orchestra.run.vm03.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T18:08:53.932 INFO:teuthology.orchestra.run.vm03.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T18:08:53.932 INFO:teuthology.orchestra.run.vm09.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 
1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T18:08:53.933 INFO:teuthology.orchestra.run.vm09.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T18:08:53.933 INFO:teuthology.orchestra.run.vm03.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T18:08:53.936 INFO:teuthology.orchestra.run.vm03.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T18:08:53.936 INFO:teuthology.orchestra.run.vm03.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T18:08:53.938 INFO:teuthology.orchestra.run.vm09.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T18:08:53.940 INFO:teuthology.orchestra.run.vm09.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T18:08:53.941 INFO:teuthology.orchestra.run.vm09.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T18:08:53.941 INFO:teuthology.orchestra.run.vm09.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T18:08:53.945 INFO:teuthology.orchestra.run.vm09.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T18:08:53.946 INFO:teuthology.orchestra.run.vm09.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T18:08:53.946 INFO:teuthology.orchestra.run.vm03.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T18:08:53.971 INFO:teuthology.orchestra.run.vm09.stdout:Get:67 
https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T18:08:53.971 INFO:teuthology.orchestra.run.vm09.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T18:08:53.972 INFO:teuthology.orchestra.run.vm09.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T18:08:53.972 INFO:teuthology.orchestra.run.vm09.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T18:08:53.973 INFO:teuthology.orchestra.run.vm09.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T18:08:53.973 INFO:teuthology.orchestra.run.vm09.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T18:08:53.987 INFO:teuthology.orchestra.run.vm09.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T18:08:53.987 INFO:teuthology.orchestra.run.vm09.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T18:08:53.987 INFO:teuthology.orchestra.run.vm09.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T18:08:53.993 INFO:teuthology.orchestra.run.vm09.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T18:08:53.993 INFO:teuthology.orchestra.run.vm09.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T18:08:53.999 INFO:teuthology.orchestra.run.vm03.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T18:08:54.000 
INFO:teuthology.orchestra.run.vm03.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T18:08:54.001 INFO:teuthology.orchestra.run.vm03.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T18:08:54.029 INFO:teuthology.orchestra.run.vm03.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T18:08:54.029 INFO:teuthology.orchestra.run.vm03.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T18:08:54.030 INFO:teuthology.orchestra.run.vm03.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T18:08:54.030 INFO:teuthology.orchestra.run.vm03.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T18:08:54.031 INFO:teuthology.orchestra.run.vm03.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T18:08:54.031 INFO:teuthology.orchestra.run.vm03.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T18:08:54.032 INFO:teuthology.orchestra.run.vm09.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T18:08:54.034 INFO:teuthology.orchestra.run.vm03.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T18:08:54.035 INFO:teuthology.orchestra.run.vm03.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T18:08:54.036 INFO:teuthology.orchestra.run.vm03.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 
1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T18:08:54.037 INFO:teuthology.orchestra.run.vm03.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T18:08:54.044 INFO:teuthology.orchestra.run.vm03.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T18:08:54.047 INFO:teuthology.orchestra.run.vm03.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T18:08:54.047 INFO:teuthology.orchestra.run.vm03.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T18:08:54.048 INFO:teuthology.orchestra.run.vm03.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T18:08:54.053 INFO:teuthology.orchestra.run.vm03.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T18:08:54.053 INFO:teuthology.orchestra.run.vm03.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T18:08:54.057 INFO:teuthology.orchestra.run.vm03.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T18:08:54.057 INFO:teuthology.orchestra.run.vm03.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T18:08:54.058 INFO:teuthology.orchestra.run.vm03.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T18:08:54.058 INFO:teuthology.orchestra.run.vm03.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T18:08:54.063 INFO:teuthology.orchestra.run.vm03.stdout:Get:71 https://archive.ubuntu.com/ubuntu 
jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T18:08:54.083 INFO:teuthology.orchestra.run.vm03.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T18:08:54.083 INFO:teuthology.orchestra.run.vm03.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T18:08:54.083 INFO:teuthology.orchestra.run.vm03.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T18:08:54.083 INFO:teuthology.orchestra.run.vm03.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T18:08:54.083 INFO:teuthology.orchestra.run.vm03.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T18:08:54.083 INFO:teuthology.orchestra.run.vm03.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T18:08:54.090 INFO:teuthology.orchestra.run.vm03.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T18:08:54.223 INFO:teuthology.orchestra.run.vm09.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T18:08:54.600 INFO:teuthology.orchestra.run.vm03.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T18:08:55.182 INFO:teuthology.orchestra.run.vm09.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 
19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T18:08:55.306 INFO:teuthology.orchestra.run.vm09.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T18:08:55.318 INFO:teuthology.orchestra.run.vm09.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T18:08:55.322 INFO:teuthology.orchestra.run.vm09.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T18:08:55.323 INFO:teuthology.orchestra.run.vm09.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T18:08:55.326 INFO:teuthology.orchestra.run.vm09.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T18:08:55.327 INFO:teuthology.orchestra.run.vm09.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T18:08:55.337 INFO:teuthology.orchestra.run.vm09.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T18:08:55.661 INFO:teuthology.orchestra.run.vm09.stdout:Get:88 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T18:08:55.661 INFO:teuthology.orchestra.run.vm09.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T18:08:55.665 INFO:teuthology.orchestra.run.vm09.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T18:08:56.795 INFO:teuthology.orchestra.run.vm09.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T18:08:57.019 INFO:teuthology.orchestra.run.vm09.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T18:08:57.028 INFO:teuthology.orchestra.run.vm09.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T18:08:57.032 INFO:teuthology.orchestra.run.vm09.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T18:08:57.042 INFO:teuthology.orchestra.run.vm09.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T18:08:57.288 
INFO:teuthology.orchestra.run.vm09.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T18:08:58.166 INFO:teuthology.orchestra.run.vm09.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T18:08:58.166 INFO:teuthology.orchestra.run.vm09.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T18:08:58.236 INFO:teuthology.orchestra.run.vm09.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T18:08:58.338 INFO:teuthology.orchestra.run.vm09.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T18:08:58.381 INFO:teuthology.orchestra.run.vm09.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T18:08:58.390 INFO:teuthology.orchestra.run.vm09.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T18:08:58.464 INFO:teuthology.orchestra.run.vm09.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 
19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T18:08:58.783 INFO:teuthology.orchestra.run.vm09.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T18:08:58.783 INFO:teuthology.orchestra.run.vm09.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T18:09:00.895 INFO:teuthology.orchestra.run.vm09.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T18:09:00.957 INFO:teuthology.orchestra.run.vm09.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T18:09:00.957 INFO:teuthology.orchestra.run.vm09.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T18:09:01.469 INFO:teuthology.orchestra.run.vm09.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T18:09:01.771 INFO:teuthology.orchestra.run.vm09.stdout:Fetched 178 MB in 8s (22.6 MB/s) 2026-03-09T18:09:01.832 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T18:09:01.857 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 111717 files and directories currently installed.) 2026-03-09T18:09:01.858 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T18:09:01.860 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T18:09:01.879 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T18:09:01.884 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T18:09:01.885 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T18:09:01.901 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T18:09:01.907 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T18:09:01.907 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T18:09:01.928 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T18:09:01.935 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T18:09:01.939 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:02.037 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T18:09:02.040 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T18:09:02.041 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:09:02.061 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T18:09:02.066 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T18:09:02.067 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:09:02.091 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T18:09:02.096 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T18:09:02.097 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T18:09:02.120 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.123 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T18:09:02.197 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.199 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T18:09:02.263 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libnbd0. 2026-03-09T18:09:02.269 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 
2026-03-09T18:09:02.270 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T18:09:02.284 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T18:09:02.289 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.290 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.316 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-rados. 2026-03-09T18:09:02.321 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.322 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.340 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T18:09:02.344 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T18:09:02.345 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.358 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T18:09:02.363 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.364 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.379 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T18:09:02.384 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-09T18:09:02.385 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.450 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T18:09:02.456 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T18:09:02.456 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T18:09:02.474 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T18:09:02.480 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T18:09:02.480 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T18:09:02.496 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T18:09:02.501 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.502 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.525 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T18:09:02.530 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T18:09:02.531 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T18:09:02.552 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T18:09:02.557 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 
2026-03-09T18:09:02.558 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T18:09:02.576 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T18:09:02.581 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T18:09:02.582 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T18:09:02.601 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package lua5.1. 2026-03-09T18:09:02.607 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T18:09:02.608 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T18:09:02.628 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package lua-any. 2026-03-09T18:09:02.633 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T18:09:02.634 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-09T18:09:02.649 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package zip. 2026-03-09T18:09:02.653 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T18:09:02.654 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T18:09:02.673 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package unzip. 2026-03-09T18:09:02.678 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T18:09:02.679 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T18:09:02.700 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package luarocks. 
2026-03-09T18:09:02.705 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T18:09:02.705 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-09T18:09:02.758 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package librgw2. 2026-03-09T18:09:02.764 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.764 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.834 INFO:teuthology.orchestra.run.vm03.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T18:09:02.885 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T18:09:02.890 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.891 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.908 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T18:09:02.913 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T18:09:02.914 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T18:09:02.931 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T18:09:02.937 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-09T18:09:02.938 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:02.961 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-common. 2026-03-09T18:09:02.967 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:02.968 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:03.361 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-base. 2026-03-09T18:09:03.364 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:03.368 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:03.513 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T18:09:03.518 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T18:09:03.519 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T18:09:03.535 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-cheroot. 2026-03-09T18:09:03.542 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T18:09:03.543 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T18:09:03.563 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T18:09:03.570 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 
2026-03-09T18:09:03.572 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T18:09:03.589 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T18:09:03.594 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T18:09:03.596 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T18:09:03.612 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T18:09:03.618 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T18:09:03.619 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T18:09:03.635 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T18:09:03.640 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T18:09:03.641 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T18:09:03.659 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-portend. 2026-03-09T18:09:03.666 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T18:09:03.667 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T18:09:03.681 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T18:09:03.687 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T18:09:03.688 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 
2026-03-09T18:09:03.703 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T18:09:03.709 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T18:09:03.711 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T18:09:03.740 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T18:09:03.746 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T18:09:03.797 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T18:09:03.837 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T18:09:03.838 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T18:09:03.839 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T18:09:03.855 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-mako. 2026-03-09T18:09:03.860 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T18:09:03.861 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T18:09:03.882 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T18:09:03.887 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T18:09:03.888 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T18:09:03.905 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-singledispatch. 
2026-03-09T18:09:03.910 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T18:09:03.912 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T18:09:03.928 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-webob. 2026-03-09T18:09:03.934 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T18:09:03.935 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T18:09:03.956 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T18:09:03.962 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T18:09:03.965 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T18:09:03.984 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T18:09:03.990 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T18:09:03.992 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T18:09:04.009 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-paste. 2026-03-09T18:09:04.015 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T18:09:04.016 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T18:09:04.051 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python-pastedeploy-tpl. 
2026-03-09T18:09:04.056 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T18:09:04.058 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T18:09:04.072 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T18:09:04.078 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T18:09:04.079 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T18:09:04.097 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T18:09:04.102 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T18:09:04.103 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T18:09:04.121 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T18:09:04.128 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T18:09:04.130 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T18:09:04.161 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T18:09:04.167 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T18:09:04.168 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T18:09:04.192 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-modules-core. 
2026-03-09T18:09:04.200 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T18:09:04.201 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.244 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T18:09:04.250 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:04.251 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.358 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T18:09:04.365 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:04.367 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.397 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T18:09:04.402 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:04.403 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.500 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T18:09:04.505 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T18:09:04.507 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T18:09:04.527 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-osd. 
2026-03-09T18:09:04.533 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:04.534 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.653 INFO:teuthology.orchestra.run.vm03.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T18:09:04.854 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph. 2026-03-09T18:09:04.860 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:04.861 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.877 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T18:09:04.883 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:04.884 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.918 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T18:09:04.924 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:04.925 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.972 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package cephadm. 2026-03-09T18:09:04.979 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-09T18:09:04.980 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:04.993 INFO:teuthology.orchestra.run.vm03.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T18:09:05.000 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T18:09:05.006 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T18:09:05.007 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T18:09:05.035 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T18:09:05.041 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T18:09:05.043 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:05.067 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T18:09:05.073 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T18:09:05.074 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T18:09:05.091 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-routes. 2026-03-09T18:09:05.097 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T18:09:05.098 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 
2026-03-09T18:09:05.107 INFO:teuthology.orchestra.run.vm03.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T18:09:05.107 INFO:teuthology.orchestra.run.vm03.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T18:09:05.127 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T18:09:05.132 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T18:09:05.133 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:05.219 INFO:teuthology.orchestra.run.vm03.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T18:09:05.219 INFO:teuthology.orchestra.run.vm03.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T18:09:05.332 INFO:teuthology.orchestra.run.vm03.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T18:09:05.517 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T18:09:05.522 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 
2026-03-09T18:09:05.523 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T18:09:05.587 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T18:09:05.592 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T18:09:05.593 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T18:09:05.630 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T18:09:05.636 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T18:09:05.637 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T18:09:05.653 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T18:09:05.659 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T18:09:05.660 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T18:09:05.785 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T18:09:05.791 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T18:09:05.792 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:06.064 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T18:09:06.070 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 
2026-03-09T18:09:06.071 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T18:09:06.086 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T18:09:06.091 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T18:09:06.093 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T18:09:06.113 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T18:09:06.119 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T18:09:06.120 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T18:09:06.140 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T18:09:06.146 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T18:09:06.147 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T18:09:06.165 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T18:09:06.171 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T18:09:06.172 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T18:09:06.195 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T18:09:06.201 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T18:09:06.215 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 
2026-03-09T18:09:06.370 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T18:09:06.376 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T18:09:06.377 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:06.392 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T18:09:06.398 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T18:09:06.399 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T18:09:06.418 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T18:09:06.424 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T18:09:06.425 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T18:09:06.440 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package jq. 2026-03-09T18:09:06.446 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T18:09:06.447 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T18:09:06.462 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package socat. 2026-03-09T18:09:06.468 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T18:09:06.469 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T18:09:06.496 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package xmlstarlet. 
2026-03-09T18:09:06.502 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T18:09:06.503 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T18:09:06.551 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-test. 2026-03-09T18:09:06.556 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T18:09:06.557 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:06.923 INFO:teuthology.orchestra.run.vm03.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T18:09:06.923 INFO:teuthology.orchestra.run.vm03.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T18:09:07.035 INFO:teuthology.orchestra.run.vm03.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T18:09:07.421 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T18:09:07.427 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T18:09:07.428 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:09:07.456 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package libcephfs-dev. 
2026-03-09T18:09:07.462 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:07.463 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:07.480 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-09T18:09:07.486 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-09T18:09:07.487 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T18:09:07.566 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-09T18:09:07.573 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-09T18:09:07.584 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-09T18:09:07.616 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package nvme-cli.
2026-03-09T18:09:07.622 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-09T18:09:07.623 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T18:09:07.666 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package pkg-config.
2026-03-09T18:09:07.672 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-09T18:09:07.673 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T18:09:07.690 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-09T18:09:07.696 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-09T18:09:07.697 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T18:09:07.743 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-09T18:09:07.748 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-09T18:09:07.749 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-09T18:09:07.765 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pastescript.
2026-03-09T18:09:07.771 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-09T18:09:07.772 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-09T18:09:07.792 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pluggy.
2026-03-09T18:09:07.798 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-09T18:09:07.799 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-09T18:09:07.820 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-psutil.
2026-03-09T18:09:07.826 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-09T18:09:07.826 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-09T18:09:07.849 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-py.
2026-03-09T18:09:07.855 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-09T18:09:07.855 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-09T18:09:07.885 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pygments.
2026-03-09T18:09:07.891 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-09T18:09:07.892 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T18:09:07.956 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-09T18:09:07.962 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-09T18:09:07.963 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-09T18:09:07.978 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-toml.
2026-03-09T18:09:07.984 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-09T18:09:07.985 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-09T18:09:08.001 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-pytest.
2026-03-09T18:09:08.006 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-09T18:09:08.007 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T18:09:08.036 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-simplejson.
2026-03-09T18:09:08.042 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-09T18:09:08.042 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-09T18:09:08.065 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-09T18:09:08.070 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-09T18:09:08.071 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-09T18:09:08.185 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package radosgw.
2026-03-09T18:09:08.191 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:08.192 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:08.475 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package rbd-fuse.
2026-03-09T18:09:08.482 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:08.483 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:08.552 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package smartmontools.
2026-03-09T18:09:08.557 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-09T18:09:08.565 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T18:09:08.606 INFO:teuthology.orchestra.run.vm09.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T18:09:08.825 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-09T18:09:08.825 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-09T18:09:09.156 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-09T18:09:09.228 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T18:09:09.230 INFO:teuthology.orchestra.run.vm09.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T18:09:09.293 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T18:09:09.489 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-09T18:09:09.872 INFO:teuthology.orchestra.run.vm09.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-09T18:09:09.879 INFO:teuthology.orchestra.run.vm09.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T18:09:09.881 INFO:teuthology.orchestra.run.vm09.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:09.927 INFO:teuthology.orchestra.run.vm09.stdout:Adding system user cephadm....done
2026-03-09T18:09:09.935 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T18:09:10.018 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-09T18:09:10.086 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T18:09:10.088 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-09T18:09:10.106 INFO:teuthology.orchestra.run.vm03.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-09T18:09:10.159 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-09T18:09:10.231 INFO:teuthology.orchestra.run.vm09.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T18:09:10.234 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-09T18:09:10.328 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T18:09:10.455 INFO:teuthology.orchestra.run.vm03.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-09T18:09:10.456 INFO:teuthology.orchestra.run.vm03.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-09T18:09:10.457 INFO:teuthology.orchestra.run.vm03.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-09T18:09:10.459 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-09T18:09:10.543 INFO:teuthology.orchestra.run.vm09.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-09T18:09:10.552 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-09T18:09:10.567 INFO:teuthology.orchestra.run.vm03.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-09T18:09:10.625 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-09T18:09:10.697 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:10.771 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T18:09:10.773 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-09T18:09:10.775 INFO:teuthology.orchestra.run.vm09.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T18:09:10.778 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T18:09:10.780 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T18:09:10.783 INFO:teuthology.orchestra.run.vm09.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-09T18:09:10.787 INFO:teuthology.orchestra.run.vm09.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-09T18:09:10.789 INFO:teuthology.orchestra.run.vm09.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-09T18:09:10.791 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T18:09:10.794 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-09T18:09:10.925 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-09T18:09:11.009 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T18:09:11.020 INFO:teuthology.orchestra.run.vm03.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-09T18:09:11.095 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-09T18:09:11.179 INFO:teuthology.orchestra.run.vm09.stdout:Setting up zip (3.0-12build2) ...
2026-03-09T18:09:11.182 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T18:09:11.463 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T18:09:11.534 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T18:09:11.536 INFO:teuthology.orchestra.run.vm09.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-09T18:09:11.538 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T18:09:11.638 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T18:09:11.777 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T18:09:11.905 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T18:09:11.990 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T18:09:12.107 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-09T18:09:12.170 INFO:teuthology.orchestra.run.vm09.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-09T18:09:12.172 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:12.258 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T18:09:12.380 INFO:teuthology.orchestra.run.vm03.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-09T18:09:12.380 INFO:teuthology.orchestra.run.vm03.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-09T18:09:12.390 INFO:teuthology.orchestra.run.vm03.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-09T18:09:12.508 INFO:teuthology.orchestra.run.vm03.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-09T18:09:12.605 INFO:teuthology.orchestra.run.vm03.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-09T18:09:12.607 INFO:teuthology.orchestra.run.vm03.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-09T18:09:12.722 INFO:teuthology.orchestra.run.vm03.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-09T18:09:12.833 INFO:teuthology.orchestra.run.vm09.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T18:09:12.858 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:12.862 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-09T18:09:12.931 INFO:teuthology.orchestra.run.vm09.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T18:09:12.934 INFO:teuthology.orchestra.run.vm09.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-09T18:09:12.936 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-09T18:09:13.008 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-09T18:09:13.072 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:13.074 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-09T18:09:13.078 INFO:teuthology.orchestra.run.vm03.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-09T18:09:13.079 INFO:teuthology.orchestra.run.vm03.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-09T18:09:13.170 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-09T18:09:13.260 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-09T18:09:13.331 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-09T18:09:13.397 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-09T18:09:13.461 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-09T18:09:13.619 INFO:teuthology.orchestra.run.vm09.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T18:09:13.838 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-09T18:09:13.928 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T18:09:13.930 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T18:09:14.002 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T18:09:14.085 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T18:09:14.184 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-09T18:09:14.256 INFO:teuthology.orchestra.run.vm09.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T18:09:14.258 INFO:teuthology.orchestra.run.vm09.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-09T18:09:14.260 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T18:09:14.263 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T18:09:14.405 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-09T18:09:14.477 INFO:teuthology.orchestra.run.vm09.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-09T18:09:14.479 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-09T18:09:14.545 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:14.546 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-09T18:09:14.621 INFO:teuthology.orchestra.run.vm09.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-09T18:09:14.623 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-09T18:09:14.699 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-09T18:09:14.843 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-09T18:09:14.929 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T18:09:15.039 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T18:09:15.041 INFO:teuthology.orchestra.run.vm09.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:15.043 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:15.045 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T18:09:15.152 INFO:teuthology.orchestra.run.vm03.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-09T18:09:15.153 INFO:teuthology.orchestra.run.vm03.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-09T18:09:15.153 INFO:teuthology.orchestra.run.vm03.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-09T18:09:15.644 INFO:teuthology.orchestra.run.vm09.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-09T18:09:15.650 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:15.653 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:15.655 INFO:teuthology.orchestra.run.vm09.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:15.657 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:15.659 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:15.678 INFO:teuthology.orchestra.run.vm03.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-09T18:09:15.730 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T18:09:15.730 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T18:09:15.995 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 178 MB in 22s (8081 kB/s)
2026-03-09T18:09:16.157 INFO:teuthology.orchestra.run.vm09.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.160 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.162 INFO:teuthology.orchestra.run.vm09.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.162 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-09T18:09:16.164 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.166 INFO:teuthology.orchestra.run.vm09.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.168 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.170 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.172 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.194 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-09T18:09:16.197 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-09T18:09:16.198 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T18:09:16.205 INFO:teuthology.orchestra.run.vm09.stdout:Adding group ceph....done
2026-03-09T18:09:16.220 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-09T18:09:16.225 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-09T18:09:16.227 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T18:09:16.238 INFO:teuthology.orchestra.run.vm09.stdout:Adding system user ceph....done
2026-03-09T18:09:16.244 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-09T18:09:16.245 INFO:teuthology.orchestra.run.vm09.stdout:Setting system user ceph properties....done
2026-03-09T18:09:16.250 INFO:teuthology.orchestra.run.vm09.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-09T18:09:16.250 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-09T18:09:16.251 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T18:09:16.272 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-09T18:09:16.278 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T18:09:16.282 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:16.317 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-09T18:09:16.321 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-09T18:09:16.326 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T18:09:16.327 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:16.347 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-09T18:09:16.353 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-09T18:09:16.354 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:16.379 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-09T18:09:16.385 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-09T18:09:16.386 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T18:09:16.410 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:16.412 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T18:09:16.490 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:16.493 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-09T18:09:16.545 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-09T18:09:16.565 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libnbd0.
2026-03-09T18:09:16.568 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-09T18:09:16.570 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-09T18:09:16.589 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs2.
2026-03-09T18:09:16.592 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:16.593 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.623 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rados.
2026-03-09T18:09:16.628 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:16.629 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.648 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-09T18:09:16.653 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T18:09:16.654 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.669 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cephfs.
2026-03-09T18:09:16.676 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:16.677 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.695 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-09T18:09:16.701 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T18:09:16.702 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.723 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-09T18:09:16.728 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-09T18:09:16.731 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T18:09:16.748 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-prettytable.
2026-03-09T18:09:16.754 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-09T18:09:16.755 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-09T18:09:16.774 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rbd.
2026-03-09T18:09:16.781 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:16.782 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.859 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-09T18:09:16.865 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-09T18:09:16.866 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T18:09:16.884 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.886 INFO:teuthology.orchestra.run.vm09.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:16.889 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-09T18:09:16.895 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-09T18:09:16.896 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T18:09:16.917 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-09T18:09:16.923 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-09T18:09:16.924 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T18:09:16.945 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua5.1.
2026-03-09T18:09:16.950 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-09T18:09:16.951 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-09T18:09:16.972 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-any.
2026-03-09T18:09:16.979 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-09T18:09:16.981 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-09T18:09:16.997 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package zip.
2026-03-09T18:09:17.002 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-09T18:09:17.003 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking zip (3.0-12build2) ...
2026-03-09T18:09:17.021 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package unzip.
2026-03-09T18:09:17.026 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-09T18:09:17.027 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-09T18:09:17.048 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package luarocks.
2026-03-09T18:09:17.053 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-09T18:09:17.054 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-09T18:09:17.102 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librgw2.
2026-03-09T18:09:17.107 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:17.108 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:17.142 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T18:09:17.142 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T18:09:17.230 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rgw.
2026-03-09T18:09:17.232 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:17.233 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:17.253 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-09T18:09:17.256 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-09T18:09:17.257 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T18:09:17.309 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libradosstriper1.
2026-03-09T18:09:17.315 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:17.315 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:17.339 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-common.
2026-03-09T18:09:17.344 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:17.345 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:17.502 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:17.593 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-09T18:09:17.733 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-base.
2026-03-09T18:09:17.739 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:17.744 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:17.848 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-09T18:09:17.853 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-09T18:09:17.855 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-09T18:09:17.871 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cheroot.
2026-03-09T18:09:17.877 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-09T18:09:17.877 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T18:09:17.896 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-09T18:09:17.902 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-09T18:09:17.903 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-09T18:09:17.919 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-09T18:09:17.924 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-09T18:09:17.925 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-09T18:09:17.942 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-09T18:09:17.948 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-09T18:09:17.998 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-09T18:09:17.998 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:18.013 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempora.
2026-03-09T18:09:18.019 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-09T18:09:18.019 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-09T18:09:18.035 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-portend.
2026-03-09T18:09:18.041 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-09T18:09:18.042 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-09T18:09:18.057 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-09T18:09:18.063 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-09T18:09:18.063 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T18:09:18.063 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T18:09:18.064 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-09T18:09:18.080 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-09T18:09:18.085 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-09T18:09:18.086 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-09T18:09:18.115 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-natsort.
2026-03-09T18:09:18.120 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-09T18:09:18.123 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-09T18:09:18.140 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-logutils.
2026-03-09T18:09:18.145 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-09T18:09:18.146 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-09T18:09:18.160 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-mako.
2026-03-09T18:09:18.165 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-09T18:09:18.166 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T18:09:18.184 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-09T18:09:18.189 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-09T18:09:18.190 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-09T18:09:18.204 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-09T18:09:18.209 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-09T18:09:18.210 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-09T18:09:18.227 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webob.
2026-03-09T18:09:18.233 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-09T18:09:18.233 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T18:09:18.256 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-waitress.
2026-03-09T18:09:18.262 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-09T18:09:18.265 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T18:09:18.284 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempita.
2026-03-09T18:09:18.290 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-09T18:09:18.291 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T18:09:18.308 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-paste.
2026-03-09T18:09:18.314 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-09T18:09:18.315 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T18:09:18.348 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-09T18:09:18.353 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-09T18:09:18.354 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T18:09:18.368 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-09T18:09:18.375 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-09T18:09:18.376 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-09T18:09:18.385 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:18.391 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webtest.
2026-03-09T18:09:18.397 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-09T18:09:18.398 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-09T18:09:18.415 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pecan.
2026-03-09T18:09:18.420 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-09T18:09:18.421 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T18:09:18.449 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T18:09:18.449 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T18:09:18.451 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-09T18:09:18.456 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-09T18:09:18.457 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T18:09:18.479 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-09T18:09:18.485 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T18:09:18.485 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:18.524 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-09T18:09:18.530 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:18.531 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:18.549 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr.
2026-03-09T18:09:18.554 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:18.564 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:18.596 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mon.
2026-03-09T18:09:18.601 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:18.602 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:18.700 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-09T18:09:18.706 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-09T18:09:18.707 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T18:09:18.869 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:18.915 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-osd.
2026-03-09T18:09:18.921 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:18.922 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:18.984 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T18:09:18.984 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T18:09:19.227 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph.
2026-03-09T18:09:19.232 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:19.233 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.247 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-fuse.
2026-03-09T18:09:19.253 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:19.253 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.285 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mds.
2026-03-09T18:09:19.290 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:19.290 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.338 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package cephadm.
2026-03-09T18:09:19.344 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:19.345 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.347 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.349 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.363 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.364 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-09T18:09:19.371 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-09T18:09:19.371 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T18:09:19.402 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-09T18:09:19.408 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T18:09:19.409 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.427 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T18:09:19.427 INFO:teuthology.orchestra.run.vm09.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T18:09:19.435 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-09T18:09:19.441 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-09T18:09:19.442 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-09T18:09:19.456 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-routes.
2026-03-09T18:09:19.462 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-09T18:09:19.463 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T18:09:19.489 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-09T18:09:19.493 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T18:09:19.494 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.815 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.855 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.857 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.868 INFO:teuthology.orchestra.run.vm09.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:19.873 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-09T18:09:19.879 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-09T18:09:19.880 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T18:09:19.943 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-joblib.
2026-03-09T18:09:19.949 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-09T18:09:19.950 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T18:09:19.985 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-09T18:09:19.985 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T18:09:19.990 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-09T18:09:19.991 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-09T18:09:19.992 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T18:09:20.006 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn.
2026-03-09T18:09:20.008 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:09:20.010 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-09T18:09:20.011 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T18:09:20.099 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-09T18:09:20.131 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-09T18:09:20.136 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T18:09:20.137 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:20.409 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cachetools.
2026-03-09T18:09:20.416 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-09T18:09:20.416 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-09T18:09:20.437 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rsa.
2026-03-09T18:09:20.441 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:20.441 INFO:teuthology.orchestra.run.vm09.stdout:Running kernel seems to be up-to-date.
2026-03-09T18:09:20.441 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:20.441 INFO:teuthology.orchestra.run.vm09.stdout:Services to be restarted:
2026-03-09T18:09:20.443 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ...
2026-03-09T18:09:20.444 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-09T18:09:20.447 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart packagekit.service
2026-03-09T18:09:20.449 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout:Service restarts being deferred:
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart unattended-upgrades.service
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout:No containers need to be restarted.
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout:No user sessions are running outdated binaries.
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:20.450 INFO:teuthology.orchestra.run.vm09.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T18:09:20.464 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-google-auth.
2026-03-09T18:09:20.471 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ...
2026-03-09T18:09:20.472 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-09T18:09:20.491 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-09T18:09:20.496 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-09T18:09:20.497 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T18:09:20.512 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-websocket.
2026-03-09T18:09:20.516 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ...
2026-03-09T18:09:20.517 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-09T18:09:20.536 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-09T18:09:20.540 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-09T18:09:20.552 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T18:09:20.706 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-09T18:09:20.712 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T18:09:20.712 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:20.728 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-09T18:09:20.734 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-09T18:09:20.735 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T18:09:20.755 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-09T18:09:20.761 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-09T18:09:20.761 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T18:09:20.778 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package jq.
2026-03-09T18:09:20.783 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-09T18:09:20.784 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-09T18:09:20.800 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package socat.
2026-03-09T18:09:20.804 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-09T18:09:20.804 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-09T18:09:20.828 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package xmlstarlet.
2026-03-09T18:09:20.833 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-09T18:09:20.834 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-09T18:09:20.886 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-test.
2026-03-09T18:09:20.892 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:20.893 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:21.405 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:09:21.408 DEBUG:teuthology.orchestra.run.vm09:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath
2026-03-09T18:09:21.484 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:09:21.659 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:09:21.660 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:09:21.734 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-volume.
2026-03-09T18:09:21.738 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-09T18:09:21.739 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:21.765 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-09T18:09:21.770 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:21.771 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:21.786 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-socket:amd64.
2026-03-09T18:09:21.792 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ...
2026-03-09T18:09:21.792 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T18:09:21.817 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:09:21.817 INFO:teuthology.orchestra.run.vm09.stdout:  kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T18:09:21.817 INFO:teuthology.orchestra.run.vm09.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T18:09:21.817 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:09:21.819 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-sec:amd64.
2026-03-09T18:09:21.825 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ...
2026-03-09T18:09:21.826 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ...
2026-03-09T18:09:21.835 INFO:teuthology.orchestra.run.vm09.stdout:The following NEW packages will be installed:
2026-03-09T18:09:21.835 INFO:teuthology.orchestra.run.vm09.stdout:  python3-jmespath python3-xmltodict
2026-03-09T18:09:21.847 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package nvme-cli.
2026-03-09T18:09:21.853 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-09T18:09:21.854 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T18:09:21.893 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package pkg-config.
2026-03-09T18:09:21.899 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ...
2026-03-09T18:09:21.900 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T18:09:21.918 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-09T18:09:21.925 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-09T18:09:21.925 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T18:09:21.967 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-09T18:09:21.973 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-09T18:09:21.974 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-09T18:09:21.989 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastescript.
2026-03-09T18:09:21.995 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ...
2026-03-09T18:09:21.996 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastescript (2.0.2-4) ...
2026-03-09T18:09:22.018 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pluggy.
2026-03-09T18:09:22.025 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-09T18:09:22.025 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-09T18:09:22.045 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-psutil.
2026-03-09T18:09:22.050 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-09T18:09:22.051 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-09T18:09:22.068 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:09:22.068 INFO:teuthology.orchestra.run.vm09.stdout:Need to get 34.3 kB of archives.
2026-03-09T18:09:22.068 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 146 kB of additional disk space will be used.
2026-03-09T18:09:22.068 INFO:teuthology.orchestra.run.vm09.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-09T18:09:22.072 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-py.
2026-03-09T18:09:22.078 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ...
2026-03-09T18:09:22.079 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-09T18:09:22.103 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pygments.
2026-03-09T18:09:22.110 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-09T18:09:22.110 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T18:09:22.146 INFO:teuthology.orchestra.run.vm09.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-09T18:09:22.169 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pyinotify.
2026-03-09T18:09:22.175 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ...
2026-03-09T18:09:22.176 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ...
2026-03-09T18:09:22.194 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-toml.
2026-03-09T18:09:22.200 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ...
2026-03-09T18:09:22.201 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-09T18:09:22.219 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pytest.
2026-03-09T18:09:22.226 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-09T18:09:22.227 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T18:09:22.255 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplejson.
2026-03-09T18:09:22.261 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-09T18:09:22.262 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-09T18:09:22.280 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-09T18:09:22.284 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-09T18:09:22.285 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-09T18:09:22.345 INFO:teuthology.orchestra.run.vm09.stdout:Fetched 34.3 kB in 0s (111 kB/s)
2026-03-09T18:09:22.401 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-jmespath.
2026-03-09T18:09:22.421 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package radosgw.
2026-03-09T18:09:22.427 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:22.428 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:22.434 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.)
2026-03-09T18:09:22.436 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-09T18:09:22.437 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-09T18:09:22.453 INFO:teuthology.orchestra.run.vm09.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-09T18:09:22.459 INFO:teuthology.orchestra.run.vm09.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-09T18:09:22.459 INFO:teuthology.orchestra.run.vm09.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-09T18:09:22.484 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-09T18:09:22.552 INFO:teuthology.orchestra.run.vm09.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-09T18:09:22.639 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package rbd-fuse.
2026-03-09T18:09:22.645 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-09T18:09:22.646 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:22.664 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package smartmontools.
2026-03-09T18:09:22.670 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-09T18:09:22.678 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T18:09:22.721 INFO:teuthology.orchestra.run.vm03.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T18:09:22.885 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:22.885 INFO:teuthology.orchestra.run.vm09.stdout:Running kernel seems to be up-to-date.
2026-03-09T18:09:22.885 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:22.885 INFO:teuthology.orchestra.run.vm09.stdout:Services to be restarted:
2026-03-09T18:09:22.891 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart packagekit.service
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout:Service restarts being deferred:
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout: systemctl restart unattended-upgrades.service
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout:No containers need to be restarted.
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout:No user sessions are running outdated binaries.
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:09:22.895 INFO:teuthology.orchestra.run.vm09.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T18:09:22.960 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-09T18:09:22.960 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-09T18:09:23.321 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-09T18:09:23.391 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-09T18:09:23.393 INFO:teuthology.orchestra.run.vm03.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T18:09:23.457 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-09T18:09:23.689 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-09T18:09:23.833 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:09:23.837 DEBUG:teuthology.parallel:result is None
2026-03-09T18:09:24.046 INFO:teuthology.orchestra.run.vm03.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-09T18:09:24.052 INFO:teuthology.orchestra.run.vm03.stdout:Could not execute systemctl:  at /usr/bin/deb-systemd-invoke line 142.
2026-03-09T18:09:24.054 INFO:teuthology.orchestra.run.vm03.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:24.096 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user cephadm....done
2026-03-09T18:09:24.104 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T18:09:24.177 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-09T18:09:24.240 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T18:09:24.242 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-09T18:09:24.309 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-09T18:09:24.375 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-09T18:09:24.378 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-09T18:09:24.470 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T18:09:24.587 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-09T18:09:24.653 INFO:teuthology.orchestra.run.vm03.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-09T18:09:24.661 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-09T18:09:24.730 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-09T18:09:24.796 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:24.864 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T18:09:24.866 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-09T18:09:24.869 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T18:09:24.871 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T18:09:24.873 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-09T18:09:24.876 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-09T18:09:24.880 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-09T18:09:24.883 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-09T18:09:24.885 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-09T18:09:24.888 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-09T18:09:25.016 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-09T18:09:25.088 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T18:09:25.158 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-09T18:09:25.238 INFO:teuthology.orchestra.run.vm03.stdout:Setting up zip (3.0-12build2) ...
2026-03-09T18:09:25.241 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-09T18:09:25.532 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T18:09:25.601 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T18:09:25.604 INFO:teuthology.orchestra.run.vm03.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-09T18:09:25.606 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T18:09:25.704 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T18:09:25.843 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T18:09:26.157 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T18:09:26.245 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T18:09:26.370 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-09T18:09:26.435 INFO:teuthology.orchestra.run.vm03.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-09T18:09:26.437 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:26.525 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T18:09:27.126 INFO:teuthology.orchestra.run.vm03.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T18:09:27.149 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:27.155 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-09T18:09:27.228 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-09T18:09:27.230 INFO:teuthology.orchestra.run.vm03.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-09T18:09:27.233 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-09T18:09:27.307 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-09T18:09:27.375 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:27.378 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-09T18:09:27.452 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-09T18:09:27.518 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-09T18:09:27.587 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-09T18:09:27.656 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-09T18:09:27.722 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-09T18:09:27.795 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-09T18:09:27.797 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-09T18:09:27.875 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-09T18:09:27.877 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T18:09:27.948 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T18:09:28.041 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T18:09:28.133 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-09T18:09:28.203 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-09T18:09:28.205 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-09T18:09:28.207 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-09T18:09:28.210 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-09T18:09:28.349 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-09T18:09:28.419 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-09T18:09:28.421 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-09T18:09:28.491 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-09T18:09:28.493 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-09T18:09:28.570 INFO:teuthology.orchestra.run.vm03.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-09T18:09:28.572 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-09T18:09:28.648 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-09T18:09:28.780 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-09T18:09:28.863 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T18:09:28.981 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-09T18:09:28.984 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:28.986 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:28.988 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T18:09:29.586 INFO:teuthology.orchestra.run.vm03.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-09T18:09:29.593 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:29.595 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:29.597 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:29.599 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:29.602 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:29.660 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T18:09:29.661 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-09T18:09:30.017 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.019 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.021 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.024 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.026 INFO:teuthology.orchestra.run.vm03.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.029 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.031 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.033 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.066 INFO:teuthology.orchestra.run.vm03.stdout:Adding group ceph....done
2026-03-09T18:09:30.105 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user ceph....done
2026-03-09T18:09:30.113 INFO:teuthology.orchestra.run.vm03.stdout:Setting system user ceph properties....done
2026-03-09T18:09:30.117 INFO:teuthology.orchestra.run.vm03.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-09T18:09:30.182 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-09T18:09:30.393 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-09T18:09:30.758 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:30.760 INFO:teuthology.orchestra.run.vm03.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:31.020 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T18:09:31.020 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-09T18:09:31.380 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:31.466 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-09T18:09:31.803 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:31.866 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T18:09:31.866 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-09T18:09:32.213 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:32.279 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T18:09:32.279 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-09T18:09:32.660 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:32.742 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T18:09:32.742 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-09T18:09:33.098 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:33.101 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:33.114 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:33.179 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T18:09:33.179 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-09T18:09:33.567 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:33.583 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:33.586 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:33.599 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:09:33.721 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T18:09:33.729 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T18:09:33.745 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:09:33.827 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-09T18:09:34.165 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:34.165 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date.
2026-03-09T18:09:34.165 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:34.165 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted:
2026-03-09T18:09:34.171 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred:
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted.
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries.
2026-03-09T18:09:34.174 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:34.175 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T18:09:35.199 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:09:35.202 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath
2026-03-09T18:09:35.278 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:09:35.484 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:09:35.484 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:09:35.698 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:09:35.698 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T18:09:35.698 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T18:09:35.698 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:09:35.714 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed:
2026-03-09T18:09:35.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath python3-xmltodict
2026-03-09T18:09:36.883 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:09:36.883 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 34.3 kB of archives.
2026-03-09T18:09:36.883 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 146 kB of additional disk space will be used.
2026-03-09T18:09:36.883 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-09T18:09:37.083 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-09T18:09:37.284 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 34.3 kB in 1s (25.0 kB/s)
2026-03-09T18:09:37.302 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jmespath.
2026-03-09T18:09:37.332 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118577 files and directories currently installed.)
2026-03-09T18:09:37.335 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-09T18:09:37.336 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-09T18:09:37.353 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-09T18:09:37.359 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-09T18:09:37.359 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-09T18:09:37.386 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-09T18:09:37.454 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-09T18:09:37.788 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:37.788 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date.
2026-03-09T18:09:37.788 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:37.788 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted:
2026-03-09T18:09:37.793 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service
2026-03-09T18:09:37.796 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:37.796 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred:
2026-03-09T18:09:37.796 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service
2026-03-09T18:09:37.796 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service
2026-03-09T18:09:37.796 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:37.796 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted.
2026-03-09T18:09:37.797 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:37.797 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries.
2026-03-09T18:09:37.797 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:09:37.797 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-09T18:09:38.597 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:09:38.600 DEBUG:teuthology.parallel:result is None
2026-03-09T18:09:38.601 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:09:39.217 DEBUG:teuthology.orchestra.run.vm03:> dpkg-query -W -f '${Version}' ceph
2026-03-09T18:09:39.225 INFO:teuthology.orchestra.run.vm03.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-09T18:09:39.225 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T18:09:39.225 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-09T18:09:39.226 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:09:39.828 DEBUG:teuthology.orchestra.run.vm09:> dpkg-query -W -f '${Version}' ceph
2026-03-09T18:09:39.837 INFO:teuthology.orchestra.run.vm09.stdout:19.2.3-678-ge911bdeb-1jammy
2026-03-09T18:09:39.837 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy
2026-03-09T18:09:39.837 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed.
2026-03-09T18:09:39.840 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-09T18:09:39.840 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T18:09:39.840 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T18:09:39.848 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T18:09:39.848 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-09T18:09:39.888 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-09T18:09:39.888 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T18:09:39.888 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T18:09:39.897 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T18:09:39.949 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T18:09:39.950 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/daemon-helper
2026-03-09T18:09:39.957 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-09T18:09:40.006 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-09T18:09:40.006 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T18:09:40.006 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T18:09:40.014 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T18:09:40.065 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T18:09:40.065 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-09T18:09:40.074 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-09T18:09:40.125 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-09T18:09:40.125 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T18:09:40.125 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T18:09:40.133 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T18:09:40.180 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T18:09:40.181 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/usr/bin/stdin-killer
2026-03-09T18:09:40.188 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-09T18:09:40.236 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-09T18:09:40.279 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'use-ca-signed-key': True}
2026-03-09T18:09:40.280 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:09:40.280 INFO:tasks.cephadm:Cluster fsid is 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:09:40.280 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-09T18:09:40.280 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.103', 'mon.b': '192.168.123.109'}
2026-03-09T18:09:40.280 INFO:tasks.cephadm:First mon is mon.a on vm03
2026-03-09T18:09:40.280 INFO:tasks.cephadm:First mgr is a
2026-03-09T18:09:40.280 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-09T18:09:40.280 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s)
2026-03-09T18:09:40.287 DEBUG:teuthology.orchestra.run.vm09:> sudo hostname $(hostname -s)
2026-03-09T18:09:40.294 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra
2026-03-09T18:09:40.294 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:09:40.944 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-09T18:09:41.625 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:09:41.626 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-09T18:09:41.626 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-09T18:09:41.626 DEBUG:teuthology.orchestra.run.vm03:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T18:09:42.946 INFO:teuthology.orchestra.run.vm03.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 18:09 /home/ubuntu/cephtest/cephadm
2026-03-09T18:09:42.947 DEBUG:teuthology.orchestra.run.vm09:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-09T18:09:44.246 INFO:teuthology.orchestra.run.vm09.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 18:09 /home/ubuntu/cephtest/cephadm
2026-03-09T18:09:44.246 DEBUG:teuthology.orchestra.run.vm03:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T18:09:44.250 DEBUG:teuthology.orchestra.run.vm09:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-09T18:09:44.257 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
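The download step above only trusts the fetched cephadm binary after two sanity checks before marking it executable: `test -s` (file exists and is non-empty) and a `stat -c%s` size check requiring more than 1000 bytes, which guards against a truncated transfer or a short HTML error page saved by `curl`. A minimal sketch of that guard, using a hypothetical local path and a `dd`-generated stand-in for the downloaded file instead of the real curl fetch:

```shell
#!/bin/sh
# Sketch of the validation the task runs after fetching cephadm:
# refuse to chmod +x a truncated or error-page download.
set -e

f=/tmp/cephadm.sketch                                # hypothetical path for illustration
dd if=/dev/zero of="$f" bs=1 count=2000 2>/dev/null  # stand-in for the curl download

# test -s: non-empty file; stat -c%s: size in bytes (GNU coreutils).
test -s "$f" && test "$(stat -c%s "$f")" -gt 1000 && chmod +x "$f"
ls -l "$f"
```

Chaining the checks with `&&` means a failed check propagates a non-zero exit status back to teuthology, failing the task before a broken binary is ever executed.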
2026-03-09T18:09:44.258 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T18:09:44.294 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-09T18:09:44.386 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T18:09:44.387 INFO:teuthology.orchestra.run.vm09.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T18:10:20.119 INFO:teuthology.orchestra.run.vm03.stdout:{
2026-03-09T18:10:20.119 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T18:10:20.119 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T18:10:20.119 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [
2026-03-09T18:10:20.119 INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T18:10:20.119 INFO:teuthology.orchestra.run.vm03.stdout: ]
2026-03-09T18:10:20.119 INFO:teuthology.orchestra.run.vm03.stdout:}
2026-03-09T18:10:49.221 INFO:teuthology.orchestra.run.vm09.stdout:{
2026-03-09T18:10:49.221 INFO:teuthology.orchestra.run.vm09.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-09T18:10:49.221 INFO:teuthology.orchestra.run.vm09.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-09T18:10:49.222 INFO:teuthology.orchestra.run.vm09.stdout: "repo_digests": [
2026-03-09T18:10:49.222 INFO:teuthology.orchestra.run.vm09.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-09T18:10:49.222 INFO:teuthology.orchestra.run.vm09.stdout: ]
2026-03-09T18:10:49.222 INFO:teuthology.orchestra.run.vm09.stdout:}
2026-03-09T18:10:49.235 DEBUG:teuthology.orchestra.run.vm03:> sudo ssh-keygen -t rsa -f /root/ca-key -N ''
2026-03-09T18:10:49.389 INFO:teuthology.orchestra.run.vm03.stdout:Generating public/private rsa key pair.
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:Your identification has been saved in /root/ca-key
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:Your public key has been saved in /root/ca-key.pub
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:The key fingerprint is:
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:SHA256:XtEJcEQrqrXLzlfhkzvuELW7y8WbOA60wdTi1Sul5qQ root@vm03
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:The key's randomart image is:
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:+---[RSA 3072]----+
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| .+= |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| o = . |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| + * = |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| = =.= . |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| o S.Bo. |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| o + X=+ |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| . . E.ooo |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| o ..+++ o |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:| .=. +O+o |
2026-03-09T18:10:49.390 INFO:teuthology.orchestra.run.vm03.stdout:+----[SHA256]-----+
2026-03-09T18:10:49.390 DEBUG:teuthology.orchestra.run.vm03:> sudo cat /root/ca-key.pub
2026-03-09T18:10:49.397 INFO:teuthology.orchestra.run.vm03.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWRwEbarCwC3VTg/nW76NbOqvjfOBT3GD2HM36Ui58/dvkL8+vWk4iczead/V+ZJH9TQPzsExeUbyFejIKT8Ho+HyLruhiEMvS3QPYtJznV2nGT/pmPiHVaPR45CvRJUf39WOeC9BNQOH0yq3OzEv2rCjsvY3W5qI0lYTT+xwJTGumxyTfyNJJcIct0E9v1Se4zAoBZ6YEcTrYWy/+y0MCDv98suLIHXSVFjoEfwlwlblMV8eF5aBQndxanBO3n634xjZmeuhSjX3S88dLwmifZqQCLYyvupmqgOsR4+p58F3OJ/+n8i8R0e6ZslPTAtqWzZ2YT0IfxgPV8/tdBA9q91wjAeWDv7QH0dR0zqC5IdIPn6RHJiEj8tSJ8eDJpwquVMgBU//3mC2rH+TX6aLuDXUAlfng3cT6OmroQUZ3DqoMTUDrCK5+2vib9r0uBzG6XrGo1GHVFJkLV6Bm6M/m7Oc4J4S74YTR9PZxGTxrpx527hOpA1leE8TNHQSXk7E= root@vm03
2026-03-09T18:10:49.398 DEBUG:teuthology.orchestra.run.vm03:> sudo echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWRwEbarCwC3VTg/nW76NbOqvjfOBT3GD2HM36Ui58/dvkL8+vWk4iczead/V+ZJH9TQPzsExeUbyFejIKT8Ho+HyLruhiEMvS3QPYtJznV2nGT/pmPiHVaPR45CvRJUf39WOeC9BNQOH0yq3OzEv2rCjsvY3W5qI0lYTT+xwJTGumxyTfyNJJcIct0E9v1Se4zAoBZ6YEcTrYWy/+y0MCDv98suLIHXSVFjoEfwlwlblMV8eF5aBQndxanBO3n634xjZmeuhSjX3S88dLwmifZqQCLYyvupmqgOsR4+p58F3OJ/+n8i8R0e6ZslPTAtqWzZ2YT0IfxgPV8/tdBA9q91wjAeWDv7QH0dR0zqC5IdIPn6RHJiEj8tSJ8eDJpwquVMgBU//3mC2rH+TX6aLuDXUAlfng3cT6OmroQUZ3DqoMTUDrCK5+2vib9r0uBzG6XrGo1GHVFJkLV6Bm6M/m7Oc4J4S74YTR9PZxGTxrpx527hOpA1leE8TNHQSXk7E= root@vm03
2026-03-09T18:10:49.398 DEBUG:teuthology.orchestra.run.vm03:> ' | sudo tee -a /etc/ssh/ca-key.pub
2026-03-09T18:10:49.449 INFO:teuthology.orchestra.run.vm03.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWRwEbarCwC3VTg/nW76NbOqvjfOBT3GD2HM36Ui58/dvkL8+vWk4iczead/V+ZJH9TQPzsExeUbyFejIKT8Ho+HyLruhiEMvS3QPYtJznV2nGT/pmPiHVaPR45CvRJUf39WOeC9BNQOH0yq3OzEv2rCjsvY3W5qI0lYTT+xwJTGumxyTfyNJJcIct0E9v1Se4zAoBZ6YEcTrYWy/+y0MCDv98suLIHXSVFjoEfwlwlblMV8eF5aBQndxanBO3n634xjZmeuhSjX3S88dLwmifZqQCLYyvupmqgOsR4+p58F3OJ/+n8i8R0e6ZslPTAtqWzZ2YT0IfxgPV8/tdBA9q91wjAeWDv7QH0dR0zqC5IdIPn6RHJiEj8tSJ8eDJpwquVMgBU//3mC2rH+TX6aLuDXUAlfng3cT6OmroQUZ3DqoMTUDrCK5+2vib9r0uBzG6XrGo1GHVFJkLV6Bm6M/m7Oc4J4S74YTR9PZxGTxrpx527hOpA1leE8TNHQSXk7E= root@vm03
2026-03-09T18:10:49.449 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:10:49.450 DEBUG:teuthology.orchestra.run.vm03:> sudo echo 'TrustedUserCAKeys /etc/ssh/ca-key.pub' | sudo tee -a /etc/ssh/sshd_config && sudo systemctl restart sshd
2026-03-09T18:10:49.501 INFO:teuthology.orchestra.run.vm03.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub
2026-03-09T18:10:49.523 DEBUG:teuthology.orchestra.run.vm09:> sudo echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWRwEbarCwC3VTg/nW76NbOqvjfOBT3GD2HM36Ui58/dvkL8+vWk4iczead/V+ZJH9TQPzsExeUbyFejIKT8Ho+HyLruhiEMvS3QPYtJznV2nGT/pmPiHVaPR45CvRJUf39WOeC9BNQOH0yq3OzEv2rCjsvY3W5qI0lYTT+xwJTGumxyTfyNJJcIct0E9v1Se4zAoBZ6YEcTrYWy/+y0MCDv98suLIHXSVFjoEfwlwlblMV8eF5aBQndxanBO3n634xjZmeuhSjX3S88dLwmifZqQCLYyvupmqgOsR4+p58F3OJ/+n8i8R0e6ZslPTAtqWzZ2YT0IfxgPV8/tdBA9q91wjAeWDv7QH0dR0zqC5IdIPn6RHJiEj8tSJ8eDJpwquVMgBU//3mC2rH+TX6aLuDXUAlfng3cT6OmroQUZ3DqoMTUDrCK5+2vib9r0uBzG6XrGo1GHVFJkLV6Bm6M/m7Oc4J4S74YTR9PZxGTxrpx527hOpA1leE8TNHQSXk7E= root@vm03
2026-03-09T18:10:49.523 DEBUG:teuthology.orchestra.run.vm09:> ' | sudo tee -a /etc/ssh/ca-key.pub
2026-03-09T18:10:49.531 INFO:teuthology.orchestra.run.vm09.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWRwEbarCwC3VTg/nW76NbOqvjfOBT3GD2HM36Ui58/dvkL8+vWk4iczead/V+ZJH9TQPzsExeUbyFejIKT8Ho+HyLruhiEMvS3QPYtJznV2nGT/pmPiHVaPR45CvRJUf39WOeC9BNQOH0yq3OzEv2rCjsvY3W5qI0lYTT+xwJTGumxyTfyNJJcIct0E9v1Se4zAoBZ6YEcTrYWy/+y0MCDv98suLIHXSVFjoEfwlwlblMV8eF5aBQndxanBO3n634xjZmeuhSjX3S88dLwmifZqQCLYyvupmqgOsR4+p58F3OJ/+n8i8R0e6ZslPTAtqWzZ2YT0IfxgPV8/tdBA9q91wjAeWDv7QH0dR0zqC5IdIPn6RHJiEj8tSJ8eDJpwquVMgBU//3mC2rH+TX6aLuDXUAlfng3cT6OmroQUZ3DqoMTUDrCK5+2vib9r0uBzG6XrGo1GHVFJkLV6Bm6M/m7Oc4J4S74YTR9PZxGTxrpx527hOpA1leE8TNHQSXk7E= root@vm03
2026-03-09T18:10:49.531 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:10:49.532 DEBUG:teuthology.orchestra.run.vm09:> sudo echo 'TrustedUserCAKeys /etc/ssh/ca-key.pub' | sudo tee -a /etc/ssh/sshd_config && sudo systemctl restart sshd
2026-03-09T18:10:49.581 INFO:teuthology.orchestra.run.vm09.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub
2026-03-09T18:10:49.604 DEBUG:teuthology.orchestra.run.vm03:> sudo ssh-keygen -t rsa -f /root/cephadm-ssh-key -N '' && sudo ssh-keygen -s /root/ca-key -I user_root -n root -V +52w /root/cephadm-ssh-key
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:Generating public/private rsa key pair.
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:Your identification has been saved in /root/cephadm-ssh-key
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:Your public key has been saved in /root/cephadm-ssh-key.pub
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:The key fingerprint is:
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:SHA256:+lI+Axqf0T0KO0S2U64GnMZ88n61D5mbI0GntpdSoa0 root@vm03
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:The key's randomart image is:
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:+---[RSA 3072]----+
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| o o o |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| + + *S* . |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| X O.O *o |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| . X.% B++ |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| . X.E =+ |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:| o.+.*oo. |
2026-03-09T18:10:50.479 INFO:teuthology.orchestra.run.vm03.stdout:+----[SHA256]-----+
2026-03-09T18:10:50.487 INFO:teuthology.orchestra.run.vm03.stderr:Signed user key /root/cephadm-ssh-key-cert.pub: id "user_root" serial 0 for root valid from 2026-03-09T18:09:00 to 2027-03-08T18:10:50
2026-03-09T18:10:50.488 DEBUG:teuthology.orchestra.run.vm03:> sudo cat /etc/ssh/ca-key.pub
2026-03-09T18:10:50.494 INFO:teuthology.orchestra.run.vm03.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWRwEbarCwC3VTg/nW76NbOqvjfOBT3GD2HM36Ui58/dvkL8+vWk4iczead/V+ZJH9TQPzsExeUbyFejIKT8Ho+HyLruhiEMvS3QPYtJznV2nGT/pmPiHVaPR45CvRJUf39WOeC9BNQOH0yq3OzEv2rCjsvY3W5qI0lYTT+xwJTGumxyTfyNJJcIct0E9v1Se4zAoBZ6YEcTrYWy/+y0MCDv98suLIHXSVFjoEfwlwlblMV8eF5aBQndxanBO3n634xjZmeuhSjX3S88dLwmifZqQCLYyvupmqgOsR4+p58F3OJ/+n8i8R0e6ZslPTAtqWzZ2YT0IfxgPV8/tdBA9q91wjAeWDv7QH0dR0zqC5IdIPn6RHJiEj8tSJ8eDJpwquVMgBU//3mC2rH+TX6aLuDXUAlfng3cT6OmroQUZ3DqoMTUDrCK5+2vib9r0uBzG6XrGo1GHVFJkLV6Bm6M/m7Oc4J4S74YTR9PZxGTxrpx527hOpA1leE8TNHQSXk7E= root@vm03
2026-03-09T18:10:50.494 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:10:50.495 DEBUG:teuthology.orchestra.run.vm03:> sudo cat /etc/ssh/sshd_config | grep TrustedUserCAKeys
2026-03-09T18:10:50.545 INFO:teuthology.orchestra.run.vm03.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub
2026-03-09T18:10:50.545 DEBUG:teuthology.orchestra.run.vm09:> sudo cat /etc/ssh/ca-key.pub
2026-03-09T18:10:50.552 INFO:teuthology.orchestra.run.vm09.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWRwEbarCwC3VTg/nW76NbOqvjfOBT3GD2HM36Ui58/dvkL8+vWk4iczead/V+ZJH9TQPzsExeUbyFejIKT8Ho+HyLruhiEMvS3QPYtJznV2nGT/pmPiHVaPR45CvRJUf39WOeC9BNQOH0yq3OzEv2rCjsvY3W5qI0lYTT+xwJTGumxyTfyNJJcIct0E9v1Se4zAoBZ6YEcTrYWy/+y0MCDv98suLIHXSVFjoEfwlwlblMV8eF5aBQndxanBO3n634xjZmeuhSjX3S88dLwmifZqQCLYyvupmqgOsR4+p58F3OJ/+n8i8R0e6ZslPTAtqWzZ2YT0IfxgPV8/tdBA9q91wjAeWDv7QH0dR0zqC5IdIPn6RHJiEj8tSJ8eDJpwquVMgBU//3mC2rH+TX6aLuDXUAlfng3cT6OmroQUZ3DqoMTUDrCK5+2vib9r0uBzG6XrGo1GHVFJkLV6Bm6M/m7Oc4J4S74YTR9PZxGTxrpx527hOpA1leE8TNHQSXk7E= root@vm03
2026-03-09T18:10:50.552 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:10:50.553 DEBUG:teuthology.orchestra.run.vm09:> sudo cat /etc/ssh/sshd_config | grep TrustedUserCAKeys
2026-03-09T18:10:50.600 INFO:teuthology.orchestra.run.vm09.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub
2026-03-09T18:10:50.601 DEBUG:teuthology.orchestra.run.vm03:> sudo ls /root/
2026-03-09T18:10:50.607 INFO:teuthology.orchestra.run.vm03.stdout:ca-key
2026-03-09T18:10:50.607 INFO:teuthology.orchestra.run.vm03.stdout:ca-key.pub
2026-03-09T18:10:50.607 INFO:teuthology.orchestra.run.vm03.stdout:cephadm-ssh-key
2026-03-09T18:10:50.607 INFO:teuthology.orchestra.run.vm03.stdout:cephadm-ssh-key-cert.pub
2026-03-09T18:10:50.608 INFO:teuthology.orchestra.run.vm03.stdout:cephadm-ssh-key.pub
2026-03-09T18:10:50.608 INFO:teuthology.orchestra.run.vm03.stdout:snap
2026-03-09T18:10:50.608 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph
2026-03-09T18:10:50.657 DEBUG:teuthology.orchestra.run.vm09:> sudo mkdir -p /etc/ceph
2026-03-09T18:10:50.664 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph
2026-03-09T18:10:50.708 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 777 /etc/ceph
2026-03-09T18:10:50.716 INFO:tasks.cephadm:Writing seed config...
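The key handling above is the standard OpenSSH user-CA flow that this `test_ca_signed_key` job exercises: generate a CA key pair (`/root/ca-key`), make every sshd trust certificates issued by it via `TrustedUserCAKeys`, then sign the cephadm SSH key (`-s` CA key, `-I` certificate identity, `-n` allowed principal, `-V +52w` validity window). A minimal unprivileged sketch of the same sequence, using a fixed temp directory in place of `/root` and `/etc/ssh` (it assumes `ssh-keygen` from openssh-client is on the PATH):

```shell
#!/bin/sh
# Sketch of the CA-signed-key setup the cephadm task performs.
set -e
tmp=/tmp/ca-key-sketch
rm -rf "$tmp" && mkdir -p "$tmp"

ssh-keygen -q -t rsa -f "$tmp/ca-key" -N ''           # CA key pair (stands in for /root/ca-key)
ssh-keygen -q -t rsa -f "$tmp/cephadm-ssh-key" -N ''  # user key pair to be certified

# Sign the user key with the CA: identity "user_root", principal "root",
# valid for 52 weeks -- the same flags seen in the log. This writes
# cephadm-ssh-key-cert.pub next to the key.
ssh-keygen -s "$tmp/ca-key" -I user_root -n root -V +52w "$tmp/cephadm-ssh-key"

# sshd on each host would then accept the cert because of the line the task
# appends to sshd_config:  TrustedUserCAKeys /etc/ssh/ca-key.pub
ssh-keygen -L -f "$tmp/cephadm-ssh-key-cert.pub" | head -n 6
```

Bootstrap later hands cephadm the private key and the signed cert (`--ssh-private-key` / `--ssh-signed-cert`), so the mgr can reach every host without distributing per-host `authorized_keys` entries.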
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-09T18:10:50.717 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-09T18:10:50.717 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T18:10:50.717 DEBUG:teuthology.orchestra.run.vm03:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-09T18:10:50.752 DEBUG:tasks.cephadm:Final config: [global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 24200844-1be3-11f1-b4ce-2b35a0bfc236
mon election default strategy = 1
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-09T18:10:50.753 DEBUG:teuthology.orchestra.run.vm03:mon.a> sudo journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.a.service
2026-03-09T18:10:50.795 DEBUG:teuthology.orchestra.run.vm03:mgr.a> sudo journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.a.service
2026-03-09T18:10:50.838 INFO:tasks.cephadm:Bootstrapping...
2026-03-09T18:10:50.839 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --ssh-private-key /root/cephadm-ssh-key --ssh-signed-cert /root/cephadm-ssh-key-cert.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.103 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-09T18:10:50.970 INFO:teuthology.orchestra.run.vm03.stdout:--------------------------------------------------------------------------------
2026-03-09T18:10:50.971 INFO:teuthology.orchestra.run.vm03.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '24200844-1be3-11f1-b4ce-2b35a0bfc236', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--ssh-private-key', '/root/cephadm-ssh-key', '--ssh-signed-cert', '/root/cephadm-ssh-key-cert.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.103', '--skip-admin-label']
2026-03-09T18:10:50.971 INFO:teuthology.orchestra.run.vm03.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-09T18:10:50.971 INFO:teuthology.orchestra.run.vm03.stdout:Verifying podman|docker is present...
2026-03-09T18:10:50.971 INFO:teuthology.orchestra.run.vm03.stdout:Verifying lvm2 is present...
2026-03-09T18:10:50.971 INFO:teuthology.orchestra.run.vm03.stdout:Verifying time synchronization is in place...
2026-03-09T18:10:50.974 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T18:10:50.974 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T18:10:50.976 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T18:10:50.976 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T18:10:50.978 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-09T18:10:50.978 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T18:10:50.980 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-09T18:10:50.980 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T18:10:50.982 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-09T18:10:50.982 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout masked
2026-03-09T18:10:50.984 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-09T18:10:50.984 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T18:10:50.987 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-09T18:10:50.987 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T18:10:50.989 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-09T18:10:50.989 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T18:10:50.992 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled
2026-03-09T18:10:50.994 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active
2026-03-09T18:10:50.994 INFO:teuthology.orchestra.run.vm03.stdout:Unit ntp.service is enabled and running
2026-03-09T18:10:50.994 INFO:teuthology.orchestra.run.vm03.stdout:Repeating the final host check...
2026-03-09T18:10:50.994 INFO:teuthology.orchestra.run.vm03.stdout:docker (/usr/bin/docker) is present
2026-03-09T18:10:50.994 INFO:teuthology.orchestra.run.vm03.stdout:systemctl is present
2026-03-09T18:10:50.994 INFO:teuthology.orchestra.run.vm03.stdout:lvcreate is present
2026-03-09T18:10:50.996 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-09T18:10:50.996 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-09T18:10:50.998 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-09T18:10:50.998 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T18:10:51.000 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-09T18:10:51.000 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-09T18:10:51.002 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-09T18:10:51.002 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T18:10:51.005 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-09T18:10:51.005 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout masked
2026-03-09T18:10:51.007 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-09T18:10:51.007 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T18:10:51.009 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-09T18:10:51.009 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-09T18:10:51.011 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-09T18:10:51.012 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive
2026-03-09T18:10:51.014 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled
2026-03-09T18:10:51.016 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active
2026-03-09T18:10:51.016 INFO:teuthology.orchestra.run.vm03.stdout:Unit ntp.service is enabled and running
2026-03-09T18:10:51.016 INFO:teuthology.orchestra.run.vm03.stdout:Host looks OK
2026-03-09T18:10:51.016 INFO:teuthology.orchestra.run.vm03.stdout:Cluster fsid: 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:10:51.016 INFO:teuthology.orchestra.run.vm03.stdout:Acquiring lock 139915035379936 on /run/cephadm/24200844-1be3-11f1-b4ce-2b35a0bfc236.lock
2026-03-09T18:10:51.016 INFO:teuthology.orchestra.run.vm03.stdout:Lock 139915035379936 acquired on /run/cephadm/24200844-1be3-11f1-b4ce-2b35a0bfc236.lock
2026-03-09T18:10:51.017 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 3300 ...
2026-03-09T18:10:51.017 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 6789 ...
2026-03-09T18:10:51.017 INFO:teuthology.orchestra.run.vm03.stdout:Base mon IP(s) is [192.168.123.103:3300, 192.168.123.103:6789], mon addrv is [v2:192.168.123.103:3300,v1:192.168.123.103:6789]
2026-03-09T18:10:51.019 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.103 metric 100
2026-03-09T18:10:51.019 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-09T18:10:51.019 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.103 metric 100
2026-03-09T18:10:51.019 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.103 metric 100
2026-03-09T18:10:51.020 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-09T18:10:51.020 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-09T18:10:51.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-09T18:10:51.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-09T18:10:51.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T18:10:51.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-09T18:10:51.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:3/64 scope link
2026-03-09T18:10:51.021 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-09T18:10:51.022 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24`
2026-03-09T18:10:51.022 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24`
2026-03-09T18:10:51.022 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.1/32`
2026-03-09T18:10:51.022 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.1/32`
2026-03-09T18:10:51.022 INFO:teuthology.orchestra.run.vm03.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-09T18:10:51.022 INFO:teuthology.orchestra.run.vm03.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-09T18:10:51.022 INFO:teuthology.orchestra.run.vm03.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-09T18:10:52.049 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-09T18:10:52.049 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-09T18:10:52.049 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:10:52.049 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T18:10:52.214 INFO:teuthology.orchestra.run.vm03.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-09T18:10:52.214 INFO:teuthology.orchestra.run.vm03.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-09T18:10:52.214 INFO:teuthology.orchestra.run.vm03.stdout:Extracting ceph user uid/gid from container image...
2026-03-09T18:10:52.303 INFO:teuthology.orchestra.run.vm03.stdout:stat: stdout 167 167
2026-03-09T18:10:52.303 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial keys...
2026-03-09T18:10:52.404 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAsDa9p7v+FFhAATWCTLtjOoxKsC+cCSxkx+Q==
2026-03-09T18:10:52.510 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAsDa9p6TnwHBAAkoysz1Cfuds6bwfTS97+ww==
2026-03-09T18:10:52.610 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAsDa9pieTVIhAAI3MhS0pnucdXl5747dHNdA==
2026-03-09T18:10:52.610 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial monmap...
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:monmaptool for a [v2:192.168.123.103:3300,v1:192.168.123.103:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:setting min_mon_release = quincy
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: set fsid to 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:10:52.707 INFO:teuthology.orchestra.run.vm03.stdout:Creating mon...
2026-03-09T18:10:52.825 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.776+0000 7fac6a232d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.776+0000 7fac6a232d80 1 imported monmap:
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-09T18:10:52.684992+0000
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr created 2026-03-09T18:10:52.684992+0000
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.776+0000 7fac6a232d80 0 /usr/bin/ceph-mon: set fsid to 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr
2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug
2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Git sha 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: DB SUMMARY 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: DB Session ID: DEGYXN7YN04FRQAM04XM 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.create_if_missing: 1 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T18:10:52.826 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.env: 0x56200c1b7dc0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.info_log: 0x56203bcc8da0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_manifest_file_size: 
1073741824 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.db_log_dir: 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: 
Options.wal_dir: 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.write_buffer_manager: 0x56203bcbf5e0 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T18:10:52.826 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T18:10:52.827 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.row_cache: None 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.wal_filter: None 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.persist_stats_to_disk: 0 
2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T18:10:52.827 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_open_files: -1 
2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Compression algorithms supported: 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: kZSTD supported: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: kLZ4Compression 
supported: 1 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.780+0000 7fac6a232d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 
debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.merge_operator: 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56203bcbb520) 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 
pin_top_level_index_and_filter: 1 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-09T18:10:52.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x56203bce1350 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil) 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil) 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_size: 4096 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 
10 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr format_version: 5 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_align: 0 2026-03-09T18:10:52.832 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 
2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.num_levels: 7 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T18:10:52.833 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T18:10:52.833 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:10:52.833 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:10:52.833 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:10:52.833 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T18:10:52.834 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 
2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9580857b-1687-401b-8f48-80458ccfcf6c 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.784+0000 7fac6a232d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.788+0000 7fac6a232d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56203bce2e00 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.788+0000 7fac6a232d80 4 rocksdb: DB pointer 0x56203bdc6000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.788+0000 7fac619bc640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.788+0000 7fac619bc640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 
0.00 GB, 0.00 MB/s 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** 
Compaction Stats [default] ** 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-09T18:10:52.834 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T18:10:52.835 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x56203bce1350#8 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7e-06 secs_since: 0 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.788+0000 7fac6a232d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.788+0000 7fac6a232d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T18:10:52.788+0000 7fac6a232d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-09T18:10:52.835 INFO:teuthology.orchestra.run.vm03.stdout:create mon.a on 
2026-03-09T18:10:53.001 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target. 2026-03-09T18:10:53.178 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T18:10:53.374 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236.target → /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236.target. 2026-03-09T18:10:53.374 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236.target → /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236.target. 2026-03-09T18:10:53.564 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.a 2026-03-09T18:10:53.564 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.a.service: Unit ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.a.service not loaded. 2026-03-09T18:10:53.738 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236.target.wants/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.a.service → /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service. 2026-03-09T18:10:53.748 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T18:10:53.748 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T18:10:53.748 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon to start... 
2026-03-09T18:10:53.748 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon...
2026-03-09T18:10:54.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:53 vm03 bash[20277]: cluster 2026-03-09T18:10:53.885295+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout cluster:
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout id: 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout services:
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.246362s)
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout data:
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pgs:
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:mon is available
2026-03-09T18:10:54.260 INFO:teuthology.orchestra.run.vm03.stdout:Assimilating anything we can from ceph.conf...
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global]
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789]
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd]
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-09T18:10:54.501 INFO:teuthology.orchestra.run.vm03.stdout:Generating new minimal ceph.conf...
2026-03-09T18:10:54.719 INFO:teuthology.orchestra.run.vm03.stdout:Restarting the monitor...
2026-03-09T18:10:54.837 INFO:teuthology.orchestra.run.vm03.stdout:Setting public_network to 192.168.123.0/24,192.168.123.1/32 in mon config section
2026-03-09T18:10:54.972 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 systemd[1]: Stopping Ceph mon.a for 24200844-1be3-11f1-b4ce-2b35a0bfc236...
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20277]: debug 2026-03-09T18:10:54.764+0000 7f48394b1640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20277]: debug 2026-03-09T18:10:54.764+0000 7f48394b1640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20674]: ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236-mon-a
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 systemd[1]: ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.a.service: Deactivated successfully.
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 systemd[1]: Stopped Ceph mon.a for 24200844-1be3-11f1-b4ce-2b35a0bfc236.
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 systemd[1]: Started Ceph mon.a for 24200844-1be3-11f1-b4ce-2b35a0bfc236.
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 0 pidfile_write: ignore empty --pid-file
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 0 load: jerasure load: lrc
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Git sha 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: DB SUMMARY
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: DB Session ID: 6WT2HFEDCRYU4AUZGQ8Q
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: CURRENT file: CURRENT
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75507 ;
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.error_if_exists: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.create_if_missing: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.env: 0x562df5820dc0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.info_log: 0x562e2980f880
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.statistics: (nil)
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.use_fsync: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.db_log_dir:
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.wal_dir:
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.write_buffer_manager: 0x562e29813900
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-09T18:10:54.973 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-09T18:10:54.974 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-09T18:10:54.974 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-09T18:10:54.974 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-09T18:10:55.088 INFO:teuthology.orchestra.run.vm03.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-09T18:10:55.089 INFO:teuthology.orchestra.run.vm03.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-09T18:10:55.089 INFO:teuthology.orchestra.run.vm03.stdout:Creating mgr...
2026-03-09T18:10:55.089 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-09T18:10:55.089 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.unordered_write: 0
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.row_cache: None
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.wal_filter: None
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.two_write_queues: 0
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.wal_compression: 0
2026-03-09T18:10:55.224 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.atomic_flush: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_open_files: -1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Compression algorithms supported:
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: kZSTD supported: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: kXpressCompression supported: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: kZlibCompression supported: 1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.merge_operator:
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_filter: None
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562e2980e480)
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: cache_index_and_filter_blocks: 1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: pin_top_level_index_and_filter: 1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: index_type: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: data_block_index_type: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: index_shortening: 1
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: data_block_hash_table_util_ratio: 0.750000
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: checksum: 4
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: no_block_cache: 0
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: block_cache: 0x562e29835350
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: block_cache_name: BinnedLRUCache
2026-03-09T18:10:55.225 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: block_cache_options:
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: capacity : 536870912
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: num_shard_bits : 4
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: strict_capacity_limit : 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: high_pri_pool_ratio: 0.000
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: block_cache_compressed: (nil)
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: persistent_cache: (nil)
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: block_size: 4096
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: block_size_deviation: 10
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: block_restart_interval: 16
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: index_block_restart_interval: 1
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: metadata_block_size: 4096
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: partition_filters: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: use_delta_encoding: 1
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: filter_policy: bloomfilter
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: whole_key_filtering: 1
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: verify_compression: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: read_amp_bytes_per_bit: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: format_version: 5
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: enable_index_compression: 1
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: block_align: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: max_auto_readahead_size: 262144
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: prepopulate_block_cache: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: initial_auto_readahead_size: 8192
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: num_file_reads_for_auto_readahead: 2
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression: NoCompression
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.num_levels: 7
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T18:10:55.226
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 
2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T18:10:55.226 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 
2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 
vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.blob_file_size: 
268435456 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 
7f91c6751d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9580857b-1687-401b-8f48-80458ccfcf6c 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773079854970057, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.964+0000 7f91c6751d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.972+0000 7f91c6751d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773079854977307, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70867, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65346, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": 
"window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773079854, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9580857b-1687-401b-8f48-80458ccfcf6c", "db_session_id": "6WT2HFEDCRYU4AUZGQ8Q", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.972+0000 7f91c6751d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773079854977553, "job": 1, "event": "recovery_finished"} 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.972+0000 7f91c6751d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562e29836e00 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 4 rocksdb: DB pointer 0x562e29942000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] at bind addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 
24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 1 mon.a@-1(???) e1 preinit fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 0 mon.a@-1(???).mds e1 new map 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 0 mon.a@-1(???).mds e1 print_map 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: e1 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: btime 2026-03-09T18:10:53:889565+0000 2026-03-09T18:10:55.227 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: legacy client fscid: -1 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: No filesystems configured 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T18:10:55.228 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.976+0000 7f91c6751d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:54 vm03 bash[20762]: debug 2026-03-09T18:10:54.980+0000 7f91c6751d80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985784+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985784+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985813+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985813+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985817+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 
bash[20762]: cluster 2026-03-09T18:10:54.985817+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985819+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T18:10:52.684992+0000 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985819+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T18:10:52.684992+0000 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985826+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T18:10:52.684992+0000 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985826+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T18:10:52.684992+0000 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985830+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985830+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985833+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985833+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985835+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 
2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.985835+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.986043+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.986043+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.986056+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.986056+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.986474+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T18:10:55.228 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[20762]: cluster 2026-03-09T18:10:54.986474+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T18:10:55.275 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.a 2026-03-09T18:10:55.275 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.a.service: Unit ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.a.service not loaded. 
2026-03-09T18:10:55.434 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236.target.wants/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.a.service → /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service. 2026-03-09T18:10:55.442 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T18:10:55.442 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T18:10:55.442 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T18:10:55.442 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-09T18:10:55.442 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr to start... 2026-03-09T18:10:55.442 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr... 2026-03-09T18:10:55.510 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:10:55.510 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:55 vm03 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "24200844-1be3-11f1-b4ce-2b35a0bfc236", 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:10:55.689 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T18:10:55.690 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T18:10:55.690 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T18:10:53:889565+0000", 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T18:10:53.890197+0000", 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:10:55.690 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (1/15)... 2026-03-09T18:10:55.814 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[21034]: debug 2026-03-09T18:10:55.696+0000 7f38a368b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:10:56.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[20762]: audit 2026-03-09T18:10:55.051406+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/1530490513' entity='client.admin' 2026-03-09T18:10:56.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[20762]: audit 2026-03-09T18:10:55.051406+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/1530490513' entity='client.admin' 2026-03-09T18:10:56.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[20762]: audit 2026-03-09T18:10:55.641701+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/2194566408' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:10:56.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[20762]: audit 2026-03-09T18:10:55.641701+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.103:0/2194566408' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:10:56.073 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:55 vm03 bash[21034]: debug 2026-03-09T18:10:55.808+0000 7f38a368b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:10:56.518 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: debug 2026-03-09T18:10:56.088+0000 7f38a368b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:10:56.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: debug 2026-03-09T18:10:56.516+0000 7f38a368b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:10:56.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: debug 2026-03-09T18:10:56.592+0000 7f38a368b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:10:56.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:10:56.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:10:56.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: from numpy import show_config as show_numpy_config 2026-03-09T18:10:56.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: debug 2026-03-09T18:10:56.712+0000 7f38a368b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:10:57.322 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: debug 2026-03-09T18:10:56.840+0000 7f38a368b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:10:57.322 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: debug 2026-03-09T18:10:56.876+0000 7f38a368b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:10:57.322 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: debug 2026-03-09T18:10:56.912+0000 7f38a368b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:10:57.322 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:56 vm03 bash[21034]: debug 2026-03-09T18:10:56.952+0000 7f38a368b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:10:57.322 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[21034]: debug 2026-03-09T18:10:57.004+0000 7f38a368b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:10:57.673 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[21034]: debug 2026-03-09T18:10:57.412+0000 7f38a368b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:10:57.673 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[21034]: debug 2026-03-09T18:10:57.448+0000 7f38a368b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:10:57.673 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[21034]: debug 2026-03-09T18:10:57.484+0000 7f38a368b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-09T18:10:57.673 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[21034]: debug 2026-03-09T18:10:57.628+0000 7f38a368b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:10:57.923 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[21034]: debug 2026-03-09T18:10:57.668+0000 7f38a368b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:10:57.923 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[21034]: debug 2026-03-09T18:10:57.712+0000 7f38a368b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:10:57.923 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[21034]: debug 2026-03-09T18:10:57.852+0000 7f38a368b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "24200844-1be3-11f1-b4ce-2b35a0bfc236", 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T18:10:57.951 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:10:57.951 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 2, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 
"pgmap": { 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T18:10:53:889565+0000", 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 
2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T18:10:53.890197+0000", 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:10:57.952 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (2/15)... 2026-03-09T18:10:58.196 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[21034]: debug 2026-03-09T18:10:58.020+0000 7f38a368b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:10:58.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[20762]: audit 2026-03-09T18:10:57.907734+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.103:0/1835780185' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:10:58.196 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:57 vm03 bash[20762]: audit 2026-03-09T18:10:57.907734+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.103:0/1835780185' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:10:58.572 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[21034]: debug 2026-03-09T18:10:58.192+0000 7f38a368b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:10:58.572 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[21034]: debug 2026-03-09T18:10:58.228+0000 7f38a368b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:10:58.572 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[21034]: debug 2026-03-09T18:10:58.268+0000 7f38a368b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:10:58.572 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[21034]: debug 2026-03-09T18:10:58.416+0000 7f38a368b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:10:58.960 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[21034]: debug 2026-03-09T18:10:58.648+0000 7f38a368b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: cluster 2026-03-09T18:10:58.652251+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: cluster 2026-03-09T18:10:58.652251+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: cluster 2026-03-09T18:10:58.655695+0000 mon.a (mon.0) 16 : 
cluster [DBG] mgrmap e2: a(active, starting, since 0.00352391s) 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: cluster 2026-03-09T18:10:58.655695+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.00352391s) 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.657595+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.657595+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.657670+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.657670+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.657737+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.657737+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: 
audit 2026-03-09T18:10:58.657805+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.657805+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.659633+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.659633+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: cluster 2026-03-09T18:10:58.663779+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon a is now available 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: cluster 2026-03-09T18:10:58.663779+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon a is now available 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.674300+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.674300+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 
bash[20762]: audit 2026-03-09T18:10:58.674657+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.674657+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.676428+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.676428+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.679168+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.679168+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.680246+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T18:10:59.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:10:58 vm03 bash[20762]: audit 2026-03-09T18:10:58.680246+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 
192.168.123.103:0/159680380' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "24200844-1be3-11f1-b4ce-2b35a0bfc236", 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:11:00.253 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T18:11:00.253 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T18:11:00.254 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T18:10:53:889565+0000", 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": 
{ 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T18:10:53.890197+0000", 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:11:00.254 INFO:teuthology.orchestra.run.vm03.stdout:mgr is available 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789] 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T18:11:00.502 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T18:11:00.502 INFO:teuthology.orchestra.run.vm03.stdout:Enabling cephadm module... 2026-03-09T18:11:00.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:00 vm03 bash[20762]: cluster 2026-03-09T18:10:59.659959+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: a(active, since 1.00779s) 2026-03-09T18:11:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:00 vm03 bash[20762]: cluster 2026-03-09T18:10:59.659959+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: a(active, since 1.00779s) 2026-03-09T18:11:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:00 vm03 bash[20762]: audit 2026-03-09T18:11:00.218317+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.103:0/870094414' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:11:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:00 vm03 bash[20762]: audit 2026-03-09T18:11:00.218317+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.103:0/870094414' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T18:11:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:00 vm03 bash[20762]: audit 2026-03-09T18:11:00.458996+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 
192.168.123.103:0/2979796563' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T18:11:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:00 vm03 bash[20762]: audit 2026-03-09T18:11:00.458996+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.103:0/2979796563' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T18:11:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:00 vm03 bash[20762]: audit 2026-03-09T18:11:00.461470+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.103:0/2979796563' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T18:11:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:00 vm03 bash[20762]: audit 2026-03-09T18:11:00.461470+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.103:0/2979796563' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[21034]: ignoring --setuser ceph since I am not root 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[21034]: ignoring --setgroup ceph since I am not root 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[21034]: debug 2026-03-09T18:11:01.596+0000 7f64f281e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[21034]: debug 2026-03-09T18:11:01.636+0000 7f64f281e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[20762]: audit 2026-03-09T18:11:00.736745+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 
192.168.123.103:0/2411689229' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[20762]: audit 2026-03-09T18:11:00.736745+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.103:0/2411689229' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[20762]: audit 2026-03-09T18:11:01.462734+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.103:0/2411689229' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[20762]: audit 2026-03-09T18:11:01.462734+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.103:0/2411689229' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[20762]: cluster 2026-03-09T18:11:01.465300+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T18:11:01.764 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[20762]: cluster 2026-03-09T18:11:01.465300+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T18:11:01.853 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:11:01.853 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-09T18:11:01.853 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T18:11:01.853 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-09T18:11:01.853 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T18:11:01.853 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:11:01.853 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart... 2026-03-09T18:11:01.853 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 4... 2026-03-09T18:11:02.065 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:01 vm03 bash[21034]: debug 2026-03-09T18:11:01.760+0000 7f64f281e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:11:02.322 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.060+0000 7f64f281e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:11:02.792 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.472+0000 7f64f281e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:11:02.793 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.548+0000 7f64f281e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:11:02.793 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:11:02.793 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:11:02.793 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: from numpy import show_config as show_numpy_config 2026-03-09T18:11:02.793 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.660+0000 7f64f281e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:11:02.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[20762]: audit 2026-03-09T18:11:01.794052+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.103:0/2765299237' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:11:02.793 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[20762]: audit 2026-03-09T18:11:01.794052+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.103:0/2765299237' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:11:03.072 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.788+0000 7f64f281e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:11:03.072 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.824+0000 7f64f281e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:11:03.072 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.860+0000 7f64f281e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:11:03.072 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.900+0000 7f64f281e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:11:03.072 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:02 vm03 bash[21034]: debug 2026-03-09T18:11:02.948+0000 7f64f281e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:11:03.617 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 
18:11:03 vm03 bash[21034]: debug 2026-03-09T18:11:03.356+0000 7f64f281e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:11:03.617 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:03 vm03 bash[21034]: debug 2026-03-09T18:11:03.392+0000 7f64f281e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:11:03.617 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:03 vm03 bash[21034]: debug 2026-03-09T18:11:03.428+0000 7f64f281e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T18:11:03.617 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:03 vm03 bash[21034]: debug 2026-03-09T18:11:03.572+0000 7f64f281e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:11:03.617 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:03 vm03 bash[21034]: debug 2026-03-09T18:11:03.612+0000 7f64f281e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:11:03.911 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:03 vm03 bash[21034]: debug 2026-03-09T18:11:03.652+0000 7f64f281e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:11:03.911 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:03 vm03 bash[21034]: debug 2026-03-09T18:11:03.756+0000 7f64f281e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:11:03.911 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:03 vm03 bash[21034]: debug 2026-03-09T18:11:03.908+0000 7f64f281e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:11:04.288 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[21034]: debug 2026-03-09T18:11:04.068+0000 7f64f281e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:11:04.288 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[21034]: debug 2026-03-09T18:11:04.104+0000 7f64f281e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:11:04.288 
INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[21034]: debug 2026-03-09T18:11:04.144+0000 7f64f281e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:11:04.288 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[21034]: debug 2026-03-09T18:11:04.284+0000 7f64f281e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:11:04.558 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[21034]: debug 2026-03-09T18:11:04.500+0000 7f64f281e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:11:04.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.504292+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon a restarted 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.504292+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon a restarted 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.504530+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon a 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.504530+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon a 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.509228+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.509228+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.509368+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e5: a(active, 
starting, since 0.00491265s) 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.509368+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00491265s) 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.510571+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.510571+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.510807+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.510807+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.512577+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.512577+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:11:04.823 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.512849+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.512849+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.513055+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.513055+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.518647+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon a is now available 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: cluster 2026-03-09T18:11:04.518647+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon a is now available 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.526662+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.526662+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 
vm03 bash[20762]: audit 2026-03-09T18:11:04.530666+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.530666+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.540478+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.540478+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.544499+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.544499+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.551434+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:04.823 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.551434+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.552785+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:04.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:04 vm03 bash[20762]: audit 2026-03-09T18:11:04.552785+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:05.553 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:11:05.553 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-09T18:11:05.553 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T18:11:05.553 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:11:05.553 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 4 is available 2026-03-09T18:11:05.553 INFO:teuthology.orchestra.run.vm03.stdout:Setting orchestrator backend to cephadm... 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: cephadm 2026-03-09T18:11:04.524367+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: cephadm 2026-03-09T18:11:04.524367+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:04.894707+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:04.894707+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:04.897617+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:04.897617+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.388148+0000 mgr.a (mgr.14118) 2 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Bus STARTING 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.388148+0000 mgr.a (mgr.14118) 2 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Bus STARTING 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.489572+0000 mgr.a (mgr.14118) 3 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.489572+0000 mgr.a (mgr.14118) 3 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: cluster 2026-03-09T18:11:05.511883+0000 mon.a (mon.0) 54 : cluster [DBG] 
mgrmap e6: a(active, since 1.00743s) 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: cluster 2026-03-09T18:11:05.511883+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e6: a(active, since 1.00743s) 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:05.600558+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:05.600558+0000 mon.a (mon.0) 55 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:05.777780+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:05.777780+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:05.783266+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:06.058 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:05 vm03 bash[20762]: audit 2026-03-09T18:11:05.783266+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:06.085 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T18:11:06.086 
INFO:teuthology.orchestra.run.vm03.stdout:Using provided ssh private key and signed cert ... 2026-03-09T18:11:06.593 INFO:teuthology.orchestra.run.vm03.stdout:Adding host vm03... 2026-03-09T18:11:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:05.513674+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T18:11:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:05.513674+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T18:11:07.292 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:05.517752+0000 mgr.a (mgr.14118) 5 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:05.517752+0000 mgr.a (mgr.14118) 5 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.599622+0000 mgr.a (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Client ('192.168.123.103', 53706) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.599622+0000 mgr.a (mgr.14118) 6 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Client ('192.168.123.103', 53706) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T18:11:07.293 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.600156+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.600156+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.600194+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Bus STARTED 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:05.600194+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [09/Mar/2026:18:11:05] ENGINE Bus STARTED 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:05.774589+0000 mgr.a (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:05.774589+0000 mgr.a (mgr.14118) 9 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:06.048869+0000 mgr.a (mgr.14118) 10 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:06.048869+0000 mgr.a (mgr.14118) 10 : audit [DBG] 
from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:06.288218+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:06.288218+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:06.290831+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:06.290831+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:06.291577+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Set ssh ssh_identity_key 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:06.291577+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Set ssh ssh_identity_key 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:06.291597+0000 mgr.a (mgr.14118) 13 : cephadm [INF] Set ssh private key 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: cephadm 2026-03-09T18:11:06.291597+0000 mgr.a (mgr.14118) 13 : cephadm [INF] Set ssh private key 2026-03-09T18:11:07.293 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:06.556722+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:07.293 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:07 vm03 bash[20762]: audit 2026-03-09T18:11:06.556722+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:08.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:08 vm03 bash[20762]: audit 2026-03-09T18:11:06.554033+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:08.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:08 vm03 bash[20762]: audit 2026-03-09T18:11:06.554033+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:08.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:08 vm03 bash[20762]: cephadm 2026-03-09T18:11:06.557551+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Set ssh ssh_identity_cert 2026-03-09T18:11:08.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:08 vm03 bash[20762]: cephadm 2026-03-09T18:11:06.557551+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Set ssh ssh_identity_cert 2026-03-09T18:11:08.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:08 vm03 bash[20762]: audit 2026-03-09T18:11:06.807950+0000 mgr.a (mgr.14118) 16 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:08.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:08 vm03 bash[20762]: audit 2026-03-09T18:11:06.807950+0000 mgr.a (mgr.14118) 16 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", 
"hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:08.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:08 vm03 bash[20762]: cluster 2026-03-09T18:11:07.301854+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-09T18:11:08.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:08 vm03 bash[20762]: cluster 2026-03-09T18:11:07.301854+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-09T18:11:09.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:09 vm03 bash[20762]: cephadm 2026-03-09T18:11:07.703892+0000 mgr.a (mgr.14118) 17 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-09T18:11:09.303 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:09 vm03 bash[20762]: cephadm 2026-03-09T18:11:07.703892+0000 mgr.a (mgr.14118) 17 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-09T18:11:09.552 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Added host 'vm03' with addr '192.168.123.103' 2026-03-09T18:11:09.552 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mon service... 2026-03-09T18:11:09.935 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-09T18:11:09.935 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mgr service... 2026-03-09T18:11:10.186 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 
2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:09.485349+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:09.485349+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: cephadm 2026-03-09T18:11:09.485690+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Added host vm03 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: cephadm 2026-03-09T18:11:09.485690+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Added host vm03 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:09.485909+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:09.485909+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:09.899770+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:09.899770+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:10.150019+0000 
mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:10.150019+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:10.415211+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.103:0/3460986501' entity='client.admin' 2026-03-09T18:11:10.748 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:10 vm03 bash[20762]: audit 2026-03-09T18:11:10.415211+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.103:0/3460986501' entity='client.admin' 2026-03-09T18:11:10.796 INFO:teuthology.orchestra.run.vm03.stdout:Enabling the dashboard module... 2026-03-09T18:11:11.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:09.896331+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:11.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:09.896331+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:11.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: cephadm 2026-03-09T18:11:09.897036+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: cephadm 2026-03-09T18:11:09.897036+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T18:11:11.823 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:10.147155+0000 mgr.a (mgr.14118) 21 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:10.147155+0000 mgr.a (mgr.14118) 21 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: cephadm 2026-03-09T18:11:10.147810+0000 mgr.a (mgr.14118) 22 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: cephadm 2026-03-09T18:11:10.147810+0000 mgr.a (mgr.14118) 22 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:10.732501+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.103:0/1129287508' entity='client.admin' 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:10.732501+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 
192.168.123.103:0/1129287508' entity='client.admin' 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:10.967686+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:10.967686+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:11.094452+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.103:0/1804006870' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:11.094452+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.103:0/1804006870' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:11.257273+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:11 vm03 bash[20762]: audit 2026-03-09T18:11:11.257273+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.103:0/1715742682' entity='mgr.a' 2026-03-09T18:11:12.302 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[21034]: ignoring --setuser ceph since I am not root 2026-03-09T18:11:12.302 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[21034]: ignoring --setgroup ceph since I am not root 2026-03-09T18:11:12.302 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[21034]: debug 2026-03-09T18:11:12.144+0000 7f19a649f140 -1 
mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T18:11:12.302 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[21034]: debug 2026-03-09T18:11:12.180+0000 7f19a649f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T18:11:12.353 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:11:12.353 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 8, 2026-03-09T18:11:12.354 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T18:11:12.354 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-09T18:11:12.354 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T18:11:12.354 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:11:12.354 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart... 2026-03-09T18:11:12.354 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 8... 
2026-03-09T18:11:12.572 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[21034]: debug 2026-03-09T18:11:12.296+0000 7f19a649f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T18:11:12.969 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[21034]: debug 2026-03-09T18:11:12.624+0000 7f19a649f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T18:11:13.258 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.060+0000 7f19a649f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T18:11:13.258 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.140+0000 7f19a649f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T18:11:13.258 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[20762]: audit 2026-03-09T18:11:11.969058+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.103:0/1804006870' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T18:11:13.258 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[20762]: audit 2026-03-09T18:11:11.969058+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 
192.168.123.103:0/1804006870' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T18:11:13.258 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[20762]: cluster 2026-03-09T18:11:11.975908+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e8: a(active, since 7s) 2026-03-09T18:11:13.258 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[20762]: cluster 2026-03-09T18:11:11.975908+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e8: a(active, since 7s) 2026-03-09T18:11:13.258 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[20762]: audit 2026-03-09T18:11:12.311121+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.103:0/829281114' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:11:13.258 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:12 vm03 bash[20762]: audit 2026-03-09T18:11:12.311121+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.103:0/829281114' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T18:11:13.512 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T18:11:13.512 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T18:11:13.512 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: from numpy import show_config as show_numpy_config 2026-03-09T18:11:13.512 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.260+0000 7f19a649f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T18:11:13.512 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.392+0000 7f19a649f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T18:11:13.512 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.428+0000 7f19a649f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T18:11:13.512 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.464+0000 7f19a649f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T18:11:13.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.508+0000 7f19a649f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T18:11:13.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.556+0000 7f19a649f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T18:11:14.262 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:13 vm03 bash[21034]: debug 2026-03-09T18:11:13.972+0000 7f19a649f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T18:11:14.262 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.008+0000 7f19a649f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T18:11:14.263 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.044+0000 7f19a649f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-09T18:11:14.263 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.180+0000 7f19a649f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T18:11:14.263 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.220+0000 7f19a649f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T18:11:14.263 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.260+0000 7f19a649f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T18:11:14.518 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.364+0000 7f19a649f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:11:14.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.516+0000 7f19a649f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T18:11:14.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.676+0000 7f19a649f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T18:11:14.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.712+0000 7f19a649f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T18:11:14.822 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.752+0000 7f19a649f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T18:11:15.168 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:14 vm03 bash[21034]: debug 2026-03-09T18:11:14.896+0000 7f19a649f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T18:11:15.168 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[21034]: debug 2026-03-09T18:11:15.108+0000 7f19a649f140 -1 
mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T18:11:15.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.115190+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon a restarted 2026-03-09T18:11:15.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.115190+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon a restarted 2026-03-09T18:11:15.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.115406+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon a 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.115406+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon a 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.119433+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.119433+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.119538+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00421847s) 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.119538+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00421847s) 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.122162+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 
2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.122162+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.122580+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.122580+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.123060+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.123060+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.123118+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.123118+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T18:11:15.573 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.123167+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.123167+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.127531+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon a is now available 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: cluster 2026-03-09T18:11:15.127531+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon a is now available 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.143455+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.143455+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.152658+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:11:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:15 vm03 bash[20762]: audit 2026-03-09T18:11:15.152658+0000 mon.a (mon.0) 84 : audit [INF] 
from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T18:11:16.183 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T18:11:16.183 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-09T18:11:16.183 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T18:11:16.183 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T18:11:16.183 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 8 is available 2026-03-09T18:11:16.183 INFO:teuthology.orchestra.run.vm03.stdout:Generating a dashboard self-signed certificate... 2026-03-09T18:11:16.439 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:16 vm03 bash[20762]: audit 2026-03-09T18:11:15.167653+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T18:11:16.439 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:16 vm03 bash[20762]: audit 2026-03-09T18:11:15.167653+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T18:11:16.439 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:16 vm03 bash[20762]: cephadm 2026-03-09T18:11:15.949576+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:18:11:15] ENGINE Bus STARTING 2026-03-09T18:11:16.439 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:16 vm03 bash[20762]: cephadm 2026-03-09T18:11:15.949576+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:18:11:15] ENGINE Bus STARTING 2026-03-09T18:11:16.439 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:16 vm03 bash[20762]: cephadm 2026-03-09T18:11:16.050867+0000 mgr.a 
(mgr.14150) 2 : cephadm [INF] [09/Mar/2026:18:11:16] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T18:11:16.439 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:16 vm03 bash[20762]: cephadm 2026-03-09T18:11:16.050867+0000 mgr.a (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:18:11:16] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T18:11:16.439 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:16 vm03 bash[20762]: cluster 2026-03-09T18:11:16.123040+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e10: a(active, since 1.00772s) 2026-03-09T18:11:16.439 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:16 vm03 bash[20762]: cluster 2026-03-09T18:11:16.123040+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e10: a(active, since 1.00772s) 2026-03-09T18:11:16.465 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T18:11:16.465 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial admin user... 2026-03-09T18:11:16.873 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$SsnoVo6/Q9muoVi74RFNLepyjFgJbrLkthQVyF2yjrikCgZoJLIn.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773079876, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T18:11:16.873 INFO:teuthology.orchestra.run.vm03.stdout:Fetching dashboard port number... 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout:Ceph Dashboard is now available at: 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout: URL: https://vm03.local:8443/ 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout: User: admin 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout: Password: 8woas5hd3d 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.112 INFO:teuthology.orchestra.run.vm03.stdout:Saving cluster configuration to /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config directory 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: ceph telemetry on 2026-03-09T18:11:17.418 
INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout:For more information see: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:11:17.418 INFO:teuthology.orchestra.run.vm03.stdout:Bootstrap complete. 2026-03-09T18:11:17.437 INFO:tasks.cephadm:Fetching config... 2026-03-09T18:11:17.437 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T18:11:17.437 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T18:11:17.439 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T18:11:17.439 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T18:11:17.439 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T18:11:17.484 INFO:tasks.cephadm:Fetching mon keyring... 
2026-03-09T18:11:17.484 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T18:11:17.485 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/keyring of=/dev/stdout
2026-03-09T18:11:17.532 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-09T18:11:17.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:16.125582+0000 mgr.a (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:16.129468+0000 mgr.a (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: cephadm 2026-03-09T18:11:16.159933+0000 mgr.a (mgr.14150) 5 : cephadm [INF] [09/Mar/2026:18:11:16] ENGINE Serving on https://192.168.123.103:7150
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: cephadm 2026-03-09T18:11:16.160131+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:18:11:16] ENGINE Bus STARTED
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: cephadm 2026-03-09T18:11:16.160467+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:18:11:16] ENGINE Client ('192.168.123.103', 46630) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:16.399962+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:16.427512+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:16.429605+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:16.685340+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:16.837100+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:17.075531+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.103:0/3512529434' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-09T18:11:17.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:17 vm03 bash[20762]: audit 2026-03-09T18:11:17.378617+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.103:0/1073851209' entity='client.admin'
2026-03-09T18:11:19.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:18 vm03 bash[20762]: cluster 2026-03-09T18:11:17.840934+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e11: a(active, since 2s)
2026-03-09T18:11:21.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:21 vm03 bash[20762]: audit 2026-03-09T18:11:20.165953+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:21.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:21 vm03 bash[20762]: audit 2026-03-09T18:11:20.716463+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:22.029 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config
2026-03-09T18:11:22.324 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-09T18:11:22.324 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-09T18:11:23.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:22 vm03 bash[20762]: cluster 2026-03-09T18:11:21.724039+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e12: a(active, since 6s)
2026-03-09T18:11:23.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:22 vm03 bash[20762]: audit 2026-03-09T18:11:22.271222+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.103:0/3968049404' entity='client.admin'
2026-03-09T18:11:26.988 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config
2026-03-09T18:11:27.341 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm09
2026-03-09T18:11:27.341 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T18:11:27.341 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.conf
2026-03-09T18:11:27.345 DEBUG:teuthology.orchestra.run.vm09:> set -ex
2026-03-09T18:11:27.345 DEBUG:teuthology.orchestra.run.vm09:> dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-09T18:11:27.388 INFO:tasks.cephadm:Adding host vm09 to orchestrator...
2026-03-09T18:11:27.389 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph orch host add vm09
2026-03-09T18:11:27.522 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:26.519036+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:27.522 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:26.521569+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:26.522225+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:26.525354+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:26.531302+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:26.534714+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:27.243456+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:27.244132+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:27.245026+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:27.245470+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:27.408564+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:27.413548+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:27.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:27 vm03 bash[20762]: audit 2026-03-09T18:11:27.417560+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:28.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:28 vm03 bash[20762]: audit 2026-03-09T18:11:27.240777+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:11:28.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:28 vm03 bash[20762]: cephadm 2026-03-09T18:11:27.246066+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-09T18:11:28.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:28 vm03 bash[20762]: cephadm 2026-03-09T18:11:27.283127+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf
2026-03-09T18:11:28.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:28 vm03 bash[20762]: cephadm 2026-03-09T18:11:27.335377+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring
2026-03-09T18:11:28.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:28 vm03 bash[20762]: cephadm 2026-03-09T18:11:27.370305+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.client.admin.keyring
2026-03-09T18:11:32.013 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config
2026-03-09T18:11:33.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:33 vm03 bash[20762]: audit 2026-03-09T18:11:32.272782+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm09", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:11:34.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:34 vm03 bash[20762]: cephadm 2026-03-09T18:11:33.187530+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm09
2026-03-09T18:11:35.019 INFO:teuthology.orchestra.run.vm03.stdout:Added host 'vm09' with addr '192.168.123.109'
2026-03-09T18:11:35.074 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph orch host ls --format=json
2026-03-09T18:11:36.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:36 vm03 bash[20762]: audit 2026-03-09T18:11:35.018919+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:36.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:36 vm03 bash[20762]: cephadm 2026-03-09T18:11:35.019349+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm09
2026-03-09T18:11:36.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:36 vm03 bash[20762]: audit 2026-03-09T18:11:35.019655+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:11:36.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:36 vm03 bash[20762]: audit 2026-03-09T18:11:35.329716+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:37.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:37 vm03 bash[20762]: cluster 2026-03-09T18:11:35.124028+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:11:37.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:37 vm03 bash[20762]: audit 2026-03-09T18:11:36.604413+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:38.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:38 vm03 bash[20762]: cluster 2026-03-09T18:11:37.124193+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:11:38.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:38 vm03 bash[20762]: audit 2026-03-09T18:11:37.171877+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:39.694 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config
2026-03-09T18:11:39.962 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-09T18:11:39.962 INFO:teuthology.orchestra.run.vm03.stdout:[{"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}, {"addr": "192.168.123.109", "hostname": "vm09", "labels": [], "status": ""}]
2026-03-09T18:11:40.019 INFO:tasks.cephadm:Setting crush tunables to default
2026-03-09T18:11:40.019 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph osd crush tunables default
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: cluster 2026-03-09T18:11:39.124354+0000 mgr.a (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:39.897438+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:39.899128+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:39.901442+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:39.903210+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:39.903637+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm09", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:39.904182+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:39.904540+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: cephadm 2026-03-09T18:11:39.905078+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: cephadm 2026-03-09T18:11:39.939917+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm09:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:39.962611+0000 mgr.a (mgr.14150) 23 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: cephadm 2026-03-09T18:11:39.970742+0000 mgr.a (mgr.14150) 24 : cephadm [INF] Updating vm09:/etc/ceph/ceph.client.admin.keyring
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: cephadm 2026-03-09T18:11:40.013156+0000 mgr.a (mgr.14150) 25 : cephadm [INF] Updating vm09:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.client.admin.keyring
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:40.049584+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:40.052572+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:40 vm03 bash[20762]: audit 2026-03-09T18:11:40.061322+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:43.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:42 vm03 bash[20762]: cluster 2026-03-09T18:11:41.124507+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:11:43.701 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config
2026-03-09T18:11:44.906 INFO:teuthology.orchestra.run.vm03.stderr:adjusted tunables profile to default
2026-03-09T18:11:44.960 INFO:tasks.cephadm:Adding mon.a on vm03
2026-03-09T18:11:44.960 INFO:tasks.cephadm:Adding mon.b on vm09
2026-03-09T18:11:44.960 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph orch apply mon '2;vm03:192.168.123.103=a;vm09:192.168.123.109=b'
2026-03-09T18:11:45.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:44 vm03 bash[20762]: cluster 2026-03-09T18:11:43.124654+0000 mgr.a (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:11:45.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:44 vm03 bash[20762]: audit 2026-03-09T18:11:43.953929+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.103:0/820791479' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-09T18:11:46.072 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf
2026-03-09T18:11:46.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:45 vm03 bash[20762]: audit 2026-03-09T18:11:44.906552+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.103:0/820791479' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-09T18:11:46.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:45 vm03 bash[20762]: cluster 2026-03-09T18:11:44.908067+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T18:11:46.324 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mon update...
2026-03-09T18:11:46.411 DEBUG:teuthology.orchestra.run.vm09:mon.b> sudo journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.b.service
2026-03-09T18:11:46.411 INFO:tasks.cephadm:Waiting for 2 mons in monmap...
2026-03-09T18:11:46.411 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph mon dump -f json
2026-03-09T18:11:47.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: cluster 2026-03-09T18:11:45.124797+0000 mgr.a (mgr.14150) 28 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.324552+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.325171+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.325945+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150
192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.325945+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.326312+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.326312+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.328516+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.328516+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.329443+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.329443+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: 
dispatch 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.329795+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:47.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:46 vm03 bash[20762]: audit 2026-03-09T18:11:46.329795+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:47.573 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.b/config 2026-03-09T18:11:48.090 INFO:teuthology.orchestra.run.vm09.stdout: 2026-03-09T18:11:48.090 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":1,"fsid":"24200844-1be3-11f1-b4ce-2b35a0bfc236","modified":"2026-03-09T18:10:52.684992Z","created":"2026-03-09T18:10:52.684992Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-09T18:11:48.090 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 1 2026-03-09T18:11:48.166 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:48 vm09 bash[22981]: debug 2026-03-09T18:11:48.145+0000 7fd8cc197640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T18:11:48.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 
09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:46.321091+0000 mgr.a (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm03:192.168.123.103=a;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:48.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:46.321091+0000 mgr.a (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm03:192.168.123.103=a;vm09:192.168.123.109=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:11:48.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: cephadm 2026-03-09T18:11:46.322154+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm09:192.168.123.109=b;count:2 2026-03-09T18:11:48.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: cephadm 2026-03-09T18:11:46.322154+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm09:192.168.123.109=b;count:2 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: cephadm 2026-03-09T18:11:46.330241+0000 mgr.a (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm09 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: cephadm 2026-03-09T18:11:46.330241+0000 mgr.a (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm09 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.792465+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.792465+0000 mon.a (mon.0) 135 : 
audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.794439+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.794439+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.796541+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.796541+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.798403+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.798403+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.808065+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:48.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:47 vm03 bash[20762]: audit 2026-03-09T18:11:47.808065+0000 mon.a (mon.0) 139 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-09T18:11:49.181 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-09T18:11:49.181 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph mon dump -f json 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:48.159224+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:48.159224+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:48.159338+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:48.159338+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:48.159450+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:48.159450+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-09T18:11:53.573 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:49.125210+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:49.125210+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:49.152969+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:49.152969+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:50.152938+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:50.152938+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:50.156336+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:50.156336+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:11:53.573 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:51.125418+0000 mgr.a (mgr.14150) 34 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:51.125418+0000 mgr.a (mgr.14150) 34 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:51.153055+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:51.153055+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:52.153262+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:52.153262+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.153186+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.153186+0000 mon.a (mon.0) 149 
: audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.163107+0000 mon.a (mon.0) 150 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.163107+0000 mon.a (mon.0) 150 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166236+0000 mon.a (mon.0) 151 : cluster [DBG] monmap epoch 2 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166236+0000 mon.a (mon.0) 151 : cluster [DBG] monmap epoch 2 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166257+0000 mon.a (mon.0) 152 : cluster [DBG] fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166257+0000 mon.a (mon.0) 152 : cluster [DBG] fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166267+0000 mon.a (mon.0) 153 : cluster [DBG] last_changed 2026-03-09T18:11:48.155430+0000 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166267+0000 mon.a (mon.0) 153 : cluster [DBG] last_changed 2026-03-09T18:11:48.155430+0000 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166278+0000 mon.a (mon.0) 154 : cluster [DBG] created 
2026-03-09T18:10:52.684992+0000 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166278+0000 mon.a (mon.0) 154 : cluster [DBG] created 2026-03-09T18:10:52.684992+0000 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166287+0000 mon.a (mon.0) 155 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166287+0000 mon.a (mon.0) 155 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166296+0000 mon.a (mon.0) 156 : cluster [DBG] election_strategy: 1 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166296+0000 mon.a (mon.0) 156 : cluster [DBG] election_strategy: 1 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166306+0000 mon.a (mon.0) 157 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166306+0000 mon.a (mon.0) 157 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166317+0000 mon.a (mon.0) 158 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166317+0000 mon.a (mon.0) 158 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b 2026-03-09T18:11:53.574 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166576+0000 mon.a (mon.0) 159 : cluster [DBG] fsmap 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166576+0000 mon.a (mon.0) 159 : cluster [DBG] fsmap 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166597+0000 mon.a (mon.0) 160 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166597+0000 mon.a (mon.0) 160 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166723+0000 mon.a (mon.0) 161 : cluster [DBG] mgrmap e12: a(active, since 38s) 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166723+0000 mon.a (mon.0) 161 : cluster [DBG] mgrmap e12: a(active, since 38s) 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166797+0000 mon.a (mon.0) 162 : cluster [INF] overall HEALTH_OK 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: cluster 2026-03-09T18:11:53.166797+0000 mon.a (mon.0) 162 : cluster [INF] overall HEALTH_OK 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.169090+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.169090+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:53.574 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.171333+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.171333+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.173930+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.173930+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.174525+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.174525+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.174998+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:11:53.574 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:53 vm03 bash[20762]: audit 2026-03-09T18:11:53.174998+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:11:53.642 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.b/config 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:48.159224+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:48.159224+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:48.159338+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:48.159338+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:48.159450+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:48.159450+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:49.125210+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:53.658 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:49.125210+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:49.152969+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:49.152969+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:50.152938+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:50.152938+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:50.156336+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:50.156336+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:51.125418+0000 mgr.a (mgr.14150) 34 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:53.658 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:51.125418+0000 mgr.a (mgr.14150) 34 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:51.153055+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:52.153262+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:53.153186+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.163107+0000 mon.a (mon.0) 150 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166236+0000 mon.a (mon.0) 151 : cluster [DBG] monmap epoch 2
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166257+0000 mon.a (mon.0) 152 : cluster [DBG] fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166267+0000 mon.a (mon.0) 153 : cluster [DBG] last_changed 2026-03-09T18:11:48.155430+0000
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166278+0000 mon.a (mon.0) 154 : cluster [DBG] created 2026-03-09T18:10:52.684992+0000
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166287+0000 mon.a (mon.0) 155 : cluster [DBG] min_mon_release 19 (squid)
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166296+0000 mon.a (mon.0) 156 : cluster [DBG] election_strategy: 1
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166306+0000 mon.a (mon.0) 157 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166317+0000 mon.a (mon.0) 158 : cluster [DBG] 1: [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] mon.b
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166576+0000 mon.a (mon.0) 159 : cluster [DBG] fsmap
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166597+0000 mon.a (mon.0) 160 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-09T18:11:53.658 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166723+0000 mon.a (mon.0) 161 : cluster [DBG] mgrmap e12: a(active, since 38s)
2026-03-09T18:11:53.659 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: cluster 2026-03-09T18:11:53.166797+0000 mon.a (mon.0) 162 : cluster [INF] overall HEALTH_OK
2026-03-09T18:11:53.659 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:53.169090+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:53.659 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:53.171333+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:53.659 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:53.173930+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:11:53.659 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:53.174525+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:11:53.659 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:53 vm09 bash[22981]: audit 2026-03-09T18:11:53.174998+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:11:53.965 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:11:53.966 INFO:teuthology.orchestra.run.vm09.stdout:{"epoch":2,"fsid":"24200844-1be3-11f1-b4ce-2b35a0bfc236","modified":"2026-03-09T18:11:48.155430Z","created":"2026-03-09T18:10:52.684992Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:3300","nonce":0},{"type":"v1","addr":"192.168.123.109:6789","nonce":0}]},"addr":"192.168.123.109:6789/0","public_addr":"192.168.123.109:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-09T18:11:53.966 INFO:teuthology.orchestra.run.vm09.stderr:dumped monmap epoch 2
2026-03-09T18:11:54.058 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-09T18:11:54.058 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph config generate-minimal-conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cluster 2026-03-09T18:11:53.125620+0000 mgr.a (mgr.14150) 35 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cluster 2026-03-09T18:11:53.125620+0000 mgr.a (mgr.14150) 35 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.175583+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.175583+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.175738+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.175738+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.212142+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm03:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 
2026-03-09T18:11:53.212142+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm03:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.215481+0000 mgr.a (mgr.14150) 39 : cephadm [INF] Updating vm09:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.215481+0000 mgr.a (mgr.14150) 39 : cephadm [INF] Updating vm09:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.255590+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.255590+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.259002+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.259002+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.261532+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.261532+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 
192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.264568+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.264568+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.267568+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.267568+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.279474+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.279474+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.281933+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.281933+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 
2026-03-09T18:11:53.284205+0000 mon.a (mon.0) 175 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.284205+0000 mon.a (mon.0) 175 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.286662+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.286662+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.287003+0000 mgr.a (mgr.14150) 40 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.287003+0000 mgr.a (mgr.14150) 40 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.287181+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.287181+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.287745+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:11:54.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.287745+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.288224+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.288224+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.288863+0000 mgr.a (mgr.14150) 41 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T18:11:54.324 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.288863+0000 mgr.a (mgr.14150) 41 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.681321+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.681321+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.685639+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.685639+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.686168+0000 mgr.a (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.686168+0000 mgr.a (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.686348+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.686348+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.686938+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.686938+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.687401+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.687401+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.687894+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.b on vm09 2026-03-09T18:11:54.324 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: cephadm 2026-03-09T18:11:53.687894+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.b on vm09 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.963120+0000 mon.a (mon.0) 185 : audit [DBG] from='client.? 192.168.123.109:0/2000865963' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:53.963120+0000 mon.a (mon.0) 185 : audit [DBG] from='client.? 192.168.123.109:0/2000865963' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.130003+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.130003+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.133298+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.133298+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.134074+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:54.324 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.134074+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.134992+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.134992+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.135507+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.135507+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.138488+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.138488+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 
2026-03-09T18:11:54.153345+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:54.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:54 vm03 bash[20762]: audit 2026-03-09T18:11:54.153345+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:54.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cluster 2026-03-09T18:11:53.125620+0000 mgr.a (mgr.14150) 35 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:54.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cluster 2026-03-09T18:11:53.125620+0000 mgr.a (mgr.14150) 35 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:54.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.175583+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T18:11:54.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.175583+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.175738+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.175738+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm09:/etc/ceph/ceph.conf 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.212142+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating 
vm03:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.212142+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm03:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.215481+0000 mgr.a (mgr.14150) 39 : cephadm [INF] Updating vm09:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.215481+0000 mgr.a (mgr.14150) 39 : cephadm [INF] Updating vm09:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/config/ceph.conf 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.255590+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.255590+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.259002+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.259002+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.261532+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 
2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.261532+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.264568+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.264568+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.267568+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.267568+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.279474+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.279474+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.281933+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.281933+0000 mon.a (mon.0) 174 : audit [INF] 
from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.284205+0000 mon.a (mon.0) 175 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.284205+0000 mon.a (mon.0) 175 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.286662+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.286662+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.287003+0000 mgr.a (mgr.14150) 40 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.287003+0000 mgr.a (mgr.14150) 40 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.287181+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.287181+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.287745+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.287745+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.288224+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.288224+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.288863+0000 mgr.a (mgr.14150) 41 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T18:11:54.415 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.288863+0000 mgr.a (mgr.14150) 41 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.681321+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.681321+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.685639+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.685639+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.686168+0000 mgr.a (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.686168+0000 mgr.a (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
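Each mon reconfiguration above runs the same three mon commands: `auth get` for the `mon.` keyring, `config get public_network`, and `config generate-minimal-conf`. The generated file is just a `[global]` section with the fsid and the v2/v1 address pair of every monitor. A minimal sketch that assembles the same shape by hand (the function name is ours; fsid and addresses are the ones from this run):

```shell
#!/bin/sh
# Build a ceph.conf equivalent in shape to `ceph config generate-minimal-conf`
# output: a [global] section carrying only fsid and mon_host.
make_minimal_conf() {
    fsid=$1; shift
    printf '# minimal ceph.conf for %s\n[global]\n' "$fsid"
    printf '\tfsid = %s\n' "$fsid"
    # remaining args are per-mon address groups; join them with spaces
    printf '\tmon_host = %s\n' "$*"
}

make_minimal_conf 24200844-1be3-11f1-b4ce-2b35a0bfc236 \
    '[v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0]' \
    '[v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0]'
```

cephadm then streams this file to each host (the `sudo dd of=/etc/ceph/ceph.conf` lines later in the log), so every node can reach the monitors without a full cluster config.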
2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.686348+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.686348+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.686938+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.686938+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.687401+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.687401+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.687894+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.b on vm09 2026-03-09T18:11:54.415 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: cephadm 2026-03-09T18:11:53.687894+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.b on vm09 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.963120+0000 mon.a (mon.0) 185 : audit [DBG] from='client.? 192.168.123.109:0/2000865963' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:53.963120+0000 mon.a (mon.0) 185 : audit [DBG] from='client.? 192.168.123.109:0/2000865963' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.130003+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.130003+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.133298+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.133298+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.134074+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:54.415 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.134074+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.134992+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.134992+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.135507+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.135507+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.138488+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.138488+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:11:54.416 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 
2026-03-09T18:11:54.153345+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:54.416 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:54 vm09 bash[22981]: audit 2026-03-09T18:11:54.153345+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T18:11:55.572 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:11:55 vm03 bash[21034]: debug 2026-03-09T18:11:55.152+0000 7f197280b640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-09T18:11:56.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:56 vm03 bash[20762]: cluster 2026-03-09T18:11:55.125808+0000 mgr.a (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:56.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:56 vm03 bash[20762]: cluster 2026-03-09T18:11:55.125808+0000 mgr.a (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:56.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:56 vm09 bash[22981]: cluster 2026-03-09T18:11:55.125808+0000 mgr.a (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:56.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:56 vm09 bash[22981]: cluster 2026-03-09T18:11:55.125808+0000 mgr.a (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:58.674 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:11:58.930 INFO:teuthology.orchestra.run.vm03.stdout:# minimal ceph.conf for 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:11:58.931 INFO:teuthology.orchestra.run.vm03.stdout:[global] 2026-03-09T18:11:58.931 
INFO:teuthology.orchestra.run.vm03.stdout: fsid = 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:11:58.931 INFO:teuthology.orchestra.run.vm03.stdout: mon_host = [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] [v2:192.168.123.109:3300/0,v1:192.168.123.109:6789/0] 2026-03-09T18:11:59.045 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-09T18:11:59.045 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T18:11:59.045 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T18:11:59.052 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T18:11:59.052 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:11:59.103 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:11:59.104 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T18:11:59.111 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:11:59.111 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T18:11:59.160 INFO:tasks.cephadm:Adding mgr.a on vm03 2026-03-09T18:11:59.161 INFO:tasks.cephadm:Adding mgr.b on vm09 2026-03-09T18:11:59.161 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph orch apply mgr '2;vm03=a;vm09=b' 2026-03-09T18:11:59.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:59 vm03 bash[20762]: cluster 2026-03-09T18:11:57.126014+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:59.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:59 vm03 bash[20762]: cluster 2026-03-09T18:11:57.126014+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:59.322 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:59 vm03 bash[20762]: audit 2026-03-09T18:11:58.931013+0000 mon.a (mon.0) 193 : audit [DBG] from='client.? 192.168.123.103:0/1883380713' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:59.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:11:59 vm03 bash[20762]: audit 2026-03-09T18:11:58.931013+0000 mon.a (mon.0) 193 : audit [DBG] from='client.? 192.168.123.103:0/1883380713' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:59.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:59 vm09 bash[22981]: cluster 2026-03-09T18:11:57.126014+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:59.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:59 vm09 bash[22981]: cluster 2026-03-09T18:11:57.126014+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:11:59.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:59 vm09 bash[22981]: audit 2026-03-09T18:11:58.931013+0000 mon.a (mon.0) 193 : audit [DBG] from='client.? 192.168.123.103:0/1883380713' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:11:59.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:11:59 vm09 bash[22981]: audit 2026-03-09T18:11:58.931013+0000 mon.a (mon.0) 193 : audit [DBG] from='client.? 
192.168.123.103:0/1883380713' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:00.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:00 vm09 bash[22981]: cluster 2026-03-09T18:11:59.126280+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:00.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:00 vm09 bash[22981]: cluster 2026-03-09T18:11:59.126280+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:00.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:00 vm03 bash[20762]: cluster 2026-03-09T18:11:59.126280+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:00.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:00 vm03 bash[20762]: cluster 2026-03-09T18:11:59.126280+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:02.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:02 vm03 bash[20762]: cluster 2026-03-09T18:12:01.126535+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:02.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:02 vm03 bash[20762]: cluster 2026-03-09T18:12:01.126535+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:02 vm09 bash[22981]: cluster 2026-03-09T18:12:01.126535+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:02 vm09 bash[22981]: cluster 2026-03-09T18:12:01.126535+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:02.803 
INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.b/config 2026-03-09T18:12:03.048 INFO:teuthology.orchestra.run.vm09.stdout:Scheduled mgr update... 2026-03-09T18:12:03.120 DEBUG:teuthology.orchestra.run.vm09:mgr.b> sudo journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.b.service 2026-03-09T18:12:03.120 INFO:tasks.cephadm:Deploying OSDs... 2026-03-09T18:12:03.120 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T18:12:03.120 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T18:12:03.123 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:12:03.124 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d? 2026-03-09T18:12:03.168 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda 2026-03-09T18:12:03.168 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb 2026-03-09T18:12:03.168 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc 2026-03-09T18:12:03.168 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd 2026-03-09T18:12:03.168 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde 2026-03-09T18:12:03.169 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T18:12:03.169 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T18:12:03.169 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb 2026-03-09T18:12:03.213 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb 2026-03-09T18:12:03.213 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T18:12:03.213 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T18:12:03.213 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:12:03.213 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 18:05:23.536077724 +0000 2026-03-09T18:12:03.213 
INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 18:05:22.448077724 +0000 2026-03-09T18:12:03.213 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 18:05:22.448077724 +0000 2026-03-09T18:12:03.213 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-09T18:12:03.213 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T18:12:03.261 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T18:12:03.261 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T18:12:03.261 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000150132 s, 3.4 MB/s 2026-03-09T18:12:03.262 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T18:12:03.310 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc 2026-03-09T18:12:03.356 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc 2026-03-09T18:12:03.357 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T18:12:03.357 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T18:12:03.357 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:12:03.357 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 18:05:23.544077724 +0000 2026-03-09T18:12:03.357 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 18:05:22.488077724 +0000 2026-03-09T18:12:03.357 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 18:05:22.488077724 +0000 2026-03-09T18:12:03.357 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-09T18:12:03.357 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T18:12:03.388 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:03.405 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T18:12:03.405 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T18:12:03.405 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000146334 s, 3.5 MB/s 2026-03-09T18:12:03.406 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T18:12:03.453 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd 2026-03-09T18:12:03.497 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd 2026-03-09T18:12:03.497 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T18:12:03.497 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T18:12:03.497 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:12:03.497 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 18:05:23.536077724 +0000 2026-03-09T18:12:03.497 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 18:05:22.488077724 +0000 2026-03-09T18:12:03.497 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 18:05:22.488077724 +0000 2026-03-09T18:12:03.497 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-09T18:12:03.497 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T18:12:03.544 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T18:12:03.544 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T18:12:03.544 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000132548 s, 3.9 MB/s 2026-03-09T18:12:03.545 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T18:12:03.589 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde 2026-03-09T18:12:03.637 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde 2026-03-09T18:12:03.637 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T18:12:03.637 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T18:12:03.637 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:12:03.637 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 18:05:23.544077724 +0000 2026-03-09T18:12:03.637 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 18:05:22.488077724 +0000 2026-03-09T18:12:03.637 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 18:05:22.488077724 +0000 2026-03-09T18:12:03.637 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-09T18:12:03.637 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T18:12:03.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:03.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:12:03.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:03.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:03.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:03.686 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T18:12:03.686 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T18:12:03.686 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000156894 s, 3.3 MB/s 2026-03-09T18:12:03.687 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T18:12:03.734 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:12:03.734 DEBUG:teuthology.orchestra.run.vm09:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T18:12:03.737 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:12:03.737 DEBUG:teuthology.orchestra.run.vm09:> ls /dev/[sv]d? 2026-03-09T18:12:03.783 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vda 2026-03-09T18:12:03.783 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdb 2026-03-09T18:12:03.783 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdc 2026-03-09T18:12:03.783 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vdd 2026-03-09T18:12:03.783 INFO:teuthology.orchestra.run.vm09.stdout:/dev/vde 2026-03-09T18:12:03.783 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T18:12:03.783 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T18:12:03.783 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdb 2026-03-09T18:12:03.832 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdb 2026-03-09T18:12:03.832 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T18:12:03.832 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T18:12:03.832 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:12:03.832 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 18:05:48.139605057 +0000 2026-03-09T18:12:03.832 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 18:05:47.207605057 +0000 2026-03-09T18:12:03.832 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 18:05:47.207605057 +0000 2026-03-09T18:12:03.832 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-09T18:12:03.832 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T18:12:03.887 
INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T18:12:03.888 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T18:12:03.888 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000157094 s, 3.3 MB/s 2026-03-09T18:12:03.888 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T18:12:03.944 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdc 2026-03-09T18:12:03.992 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdc 2026-03-09T18:12:03.992 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T18:12:03.992 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T18:12:03.992 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T18:12:03.992 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 18:05:48.147605057 +0000 2026-03-09T18:12:03.992 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 18:05:47.203605057 +0000 2026-03-09T18:12:03.992 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 18:05:47.203605057 +0000 2026-03-09T18:12:03.993 INFO:teuthology.orchestra.run.vm09.stdout: Birth: - 2026-03-09T18:12:03.993 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T18:12:04.040 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
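The per-device probe sequence repeated above for each `/dev/vdX` (a `stat`, a one-sector `dd` read, then a mount-table check excluding devtmpfs) can be collapsed into one helper. This is a simplified sketch of the idea, not teuthology's actual implementation; it accepts any readable device node and does not reproduce the root-device filtering that the `Removing root device` warning shows:

```shell
#!/bin/sh
# Vet a candidate scratch device the way the log does: it must exist, its
# first sector must be readable, and it must not appear in the mount table
# (devtmpfs lines excluded, since every device node "lives" on devtmpfs).
vet_dev() {
    dev=$1
    stat "$dev" >/dev/null 2>&1 || { echo "missing: $dev"; return 1; }
    dd if="$dev" of=/dev/null count=1 2>/dev/null || { echo "unreadable: $dev"; return 1; }
    if mount | grep -v devtmpfs | grep -q "$dev"; then
        echo "mounted: $dev"
        return 1
    fi
    echo "ok: $dev"
}
```

In the run above, all four of `/dev/vdb` through `/dev/vde` pass this vetting on both hosts before being handed to the OSD deployment step.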
2026-03-09T18:12:04.040 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:04.040 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:03 vm09 systemd[1]: Started Ceph mgr.b for 24200844-1be3-11f1-b4ce-2b35a0bfc236. 2026-03-09T18:12:04.043 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in 2026-03-09T18:12:04.043 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out 2026-03-09T18:12:04.043 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.00055561 s, 922 kB/s 2026-03-09T18:12:04.053 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T18:12:04.111 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vdd 2026-03-09T18:12:04.149 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.043772+0000 mgr.a (mgr.14150) 48 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:12:04.149 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.043772+0000 mgr.a (mgr.14150) 48 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: cephadm 2026-03-09T18:12:03.044552+0000 mgr.a (mgr.14150) 49 : cephadm [INF] Saving service mgr spec with placement vm03=a;vm09=b;count:2 
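The mgr placement spec `2;vm03=a;vm09=b` being saved here combines a daemon count with explicit `host=daemon-id` pins. A small parser sketch (the helper name is ours, not a cephadm API) showing how the string decomposes:

```shell
#!/bin/sh
# Split a cephadm placement string like "2;vm03=a;vm09=b" into its parts:
# a leading daemon count, then host=daemon-id pairs.
parse_placement() {
    spec=$1
    old_ifs=$IFS
    IFS=';'
    # word-split the spec on ';' into positional parameters
    set -- $spec
    IFS=$old_ifs
    echo "count=$1"
    shift
    for pair in "$@"; do
        echo "host=${pair%%=*} id=${pair#*=}"
    done
}

parse_placement '2;vm03=a;vm09=b'
```

With explicit pins like these, cephadm deploys `mgr.a` on vm03 and `mgr.b` on vm09 rather than choosing hosts itself, which is what the `Deploying daemon mgr.b on vm09` line that follows confirms.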
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.048139+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.048777+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.049578+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.049971+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.053432+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.054350+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.055862+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.057430+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.057901+0000 mon.a (mon.0) 202 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: cephadm 2026-03-09T18:12:03.058409+0000 mgr.a (mgr.14150) 50 : cephadm [INF] Deploying daemon mgr.b on vm09
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.855404+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.859487+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.863816+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.875929+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[22981]: audit 2026-03-09T18:12:03.888642+0000 mon.a (mon.0) 207 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[23697]: debug 2026-03-09T18:12:04.113+0000 7f74c7d83140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-09T18:12:04.150 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[23697]: debug 2026-03-09T18:12:04.145+0000 7f74c7d83140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-09T18:12:04.164 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vdd
2026-03-09T18:12:04.164 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:12:04.164 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-09T18:12:04.164 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:12:04.164 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 18:05:48.139605057 +0000
2026-03-09T18:12:04.164 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 18:05:47.203605057 +0000
2026-03-09T18:12:04.164 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 18:05:47.203605057 +0000
2026-03-09T18:12:04.164 INFO:teuthology.orchestra.run.vm09.stdout: Birth: -
2026-03-09T18:12:04.164 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-09T18:12:04.212 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-09T18:12:04.212 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-09T18:12:04.212 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000162584 s, 3.1 MB/s
2026-03-09T18:12:04.213 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-09T18:12:04.263 DEBUG:teuthology.orchestra.run.vm09:> stat /dev/vde
2026-03-09T18:12:04.309 INFO:teuthology.orchestra.run.vm09.stdout: File: /dev/vde
2026-03-09T18:12:04.309 INFO:teuthology.orchestra.run.vm09.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-09T18:12:04.309 INFO:teuthology.orchestra.run.vm09.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-09T18:12:04.309 INFO:teuthology.orchestra.run.vm09.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-09T18:12:04.309 INFO:teuthology.orchestra.run.vm09.stdout:Access: 2026-03-09 18:05:48.147605057 +0000
2026-03-09T18:12:04.309 INFO:teuthology.orchestra.run.vm09.stdout:Modify: 2026-03-09 18:05:47.203605057 +0000
2026-03-09T18:12:04.309 INFO:teuthology.orchestra.run.vm09.stdout:Change: 2026-03-09 18:05:47.203605057 +0000
2026-03-09T18:12:04.309 INFO:teuthology.orchestra.run.vm09.stdout: Birth: -
2026-03-09T18:12:04.309 DEBUG:teuthology.orchestra.run.vm09:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.043772+0000 mgr.a (mgr.14150) 48 : audit [DBG] from='client.24103 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm09=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: cephadm 2026-03-09T18:12:03.044552+0000 mgr.a (mgr.14150) 49 : cephadm [INF] Saving service mgr spec with placement vm03=a;vm09=b;count:2
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.048139+0000 mon.a (mon.0) 194 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.048777+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.049578+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.049971+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.053432+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.054350+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.055862+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.057430+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.057901+0000 mon.a (mon.0) 202 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: cephadm 2026-03-09T18:12:03.058409+0000 mgr.a (mgr.14150) 50 : cephadm [INF] Deploying daemon mgr.b on vm09
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.855404+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.859487+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.863816+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.875929+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:04.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:04 vm03 bash[20762]: audit 2026-03-09T18:12:03.888642+0000 mon.a (mon.0) 207 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:12:04.357 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[23697]: debug 2026-03-09T18:12:04.277+0000 7f74c7d83140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-09T18:12:04.358 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records in
2026-03-09T18:12:04.358 INFO:teuthology.orchestra.run.vm09.stderr:1+0 records out
2026-03-09T18:12:04.358 INFO:teuthology.orchestra.run.vm09.stderr:512 bytes copied, 0.000174637 s, 2.9 MB/s
2026-03-09T18:12:04.359 DEBUG:teuthology.orchestra.run.vm09:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-09T18:12:04.411 INFO:tasks.cephadm:Deploying osd.0 on vm03 with /dev/vde...
2026-03-09T18:12:04.411 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- lvm zap /dev/vde
2026-03-09T18:12:04.584 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:04 vm09 bash[23697]: debug 2026-03-09T18:12:04.581+0000 7f74c7d83140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-09T18:12:05.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:05 vm03 bash[20762]: cluster 2026-03-09T18:12:03.126791+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:12:05.401 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: debug 2026-03-09T18:12:05.037+0000 7f74c7d83140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-09T18:12:05.401 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: debug 2026-03-09T18:12:05.125+0000 7f74c7d83140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-09T18:12:05.401 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-09T18:12:05.401 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-09T18:12:05.401 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: from numpy import show_config as show_numpy_config
2026-03-09T18:12:05.401 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: debug 2026-03-09T18:12:05.257+0000 7f74c7d83140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-09T18:12:05.401 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[22981]: cluster 2026-03-09T18:12:03.126791+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:12:05.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: debug 2026-03-09T18:12:05.397+0000 7f74c7d83140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-09T18:12:05.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: debug 2026-03-09T18:12:05.437+0000 7f74c7d83140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-09T18:12:05.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: debug 2026-03-09T18:12:05.477+0000 7f74c7d83140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-09T18:12:05.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: debug 2026-03-09T18:12:05.517+0000 7f74c7d83140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-09T18:12:05.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:05 vm09 bash[23697]: debug 2026-03-09T18:12:05.569+0000 7f74c7d83140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-09T18:12:06.276 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.005+0000 7f74c7d83140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-09T18:12:06.276 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.041+0000 7f74c7d83140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-09T18:12:06.276 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.077+0000 7f74c7d83140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-09T18:12:06.276 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.229+0000 7f74c7d83140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-09T18:12:06.581 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.273+0000 7f74c7d83140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-09T18:12:06.581 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.313+0000 7f74c7d83140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-09T18:12:06.581 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.421+0000 7f74c7d83140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-09T18:12:06.833 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.577+0000 7f74c7d83140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-09T18:12:06.833 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.753+0000 7f74c7d83140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-09T18:12:06.833 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.789+0000 7f74c7d83140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-09T18:12:07.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:07 vm09 bash[22981]: cluster 2026-03-09T18:12:05.126985+0000 mgr.a (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:12:07.164 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.829+0000 7f74c7d83140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-09T18:12:07.164 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:06 vm09 bash[23697]: debug 2026-03-09T18:12:06.977+0000 7f74c7d83140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-09T18:12:07.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:07 vm03 bash[20762]: cluster 2026-03-09T18:12:05.126985+0000 mgr.a (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:12:07.664 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:12:07 vm09 bash[23697]: debug 2026-03-09T18:12:07.205+0000 7f74c7d83140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-09T18:12:08.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:08 vm03 bash[20762]: cluster 2026-03-09T18:12:07.208999+0000 mon.a (mon.0) 208 : cluster [DBG] Standby manager daemon b started
2026-03-09T18:12:08.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:08 vm03 bash[20762]: audit 2026-03-09T18:12:07.211119+0000 mon.b (mon.1) 2 : audit [DBG] from='mgr.? 192.168.123.109:0/1326493767' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-09T18:12:08.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:08 vm03 bash[20762]: audit 2026-03-09T18:12:07.211706+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.109:0/1326493767' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T18:12:08.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:08 vm03 bash[20762]: audit 2026-03-09T18:12:07.212788+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.109:0/1326493767' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-09T18:12:08.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:08 vm03 bash[20762]: audit 2026-03-09T18:12:07.213389+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.109:0/1326493767' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T18:12:08.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:08 vm09 bash[22981]: cluster 2026-03-09T18:12:07.208999+0000 mon.a (mon.0) 208 : cluster [DBG] Standby manager daemon b started
2026-03-09T18:12:08.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:08 vm09 bash[22981]: audit 2026-03-09T18:12:07.211119+0000 mon.b (mon.1) 2 : audit [DBG] from='mgr.? 192.168.123.109:0/1326493767' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-09T18:12:08.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:08 vm09 bash[22981]: audit 2026-03-09T18:12:07.211706+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.109:0/1326493767' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-09T18:12:08.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:08 vm09 bash[22981]: audit 2026-03-09T18:12:07.212788+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.109:0/1326493767' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-09T18:12:08.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:08 vm09 bash[22981]: audit 2026-03-09T18:12:07.213389+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.109:0/1326493767' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-09T18:12:09.028 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: cluster 2026-03-09T18:12:07.127250+0000 mgr.a (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: cluster 2026-03-09T18:12:08.073036+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: a(active, since 52s), standbys: b
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.073144+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.174144+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.813243+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.817451+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.818493+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.819153+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.819153+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.822775+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.822775+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.833451+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.833451+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.833954+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.833954+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr 
services"}]: dispatch 2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.834283+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:09.324 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:09 vm03 bash[20762]: audit 2026-03-09T18:12:08.834283+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: cluster 2026-03-09T18:12:07.127250+0000 mgr.a (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: cluster 2026-03-09T18:12:07.127250+0000 mgr.a (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: cluster 2026-03-09T18:12:08.073036+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: a(active, since 52s), standbys: b 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: cluster 2026-03-09T18:12:08.073036+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: a(active, since 52s), standbys: b 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.073144+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.073144+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' 
entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.174144+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.174144+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.813243+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.813243+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.817451+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.817451+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.818493+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.818493+0000 mon.a (mon.0) 214 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-09T18:12:09.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.819153+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.819153+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.822775+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.822775+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.833451+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.833451+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.833954+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 
192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.833954+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.834283+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:09.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:09 vm09 bash[22981]: audit 2026-03-09T18:12:08.834283+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:09.988 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:12:10.005 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph orch daemon add osd vm03:/dev/vde 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: cephadm 2026-03-09T18:12:08.833173+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: cephadm 2026-03-09T18:12:08.833173+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 
2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: cephadm 2026-03-09T18:12:08.834660+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: cephadm 2026-03-09T18:12:08.834660+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: cluster 2026-03-09T18:12:09.127473+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: cluster 2026-03-09T18:12:09.127473+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.268542+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.268542+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.273004+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.273004+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.273836+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 
192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.273836+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.274905+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.274905+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.275569+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.275569+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.279078+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:10 vm03 bash[20762]: audit 2026-03-09T18:12:09.279078+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' 
entity='mgr.a' 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: cephadm 2026-03-09T18:12:08.833173+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: cephadm 2026-03-09T18:12:08.833173+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: cephadm 2026-03-09T18:12:08.834660+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: cephadm 2026-03-09T18:12:08.834660+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: cluster 2026-03-09T18:12:09.127473+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: cluster 2026-03-09T18:12:09.127473+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.268542+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.268542+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.273004+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 
192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.273004+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.273836+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:12:10.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.273836+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:12:10.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.274905+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:10.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.274905+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:10.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.275569+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:12:10.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.275569+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-09T18:12:10.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.279078+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:10.415 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:10 vm09 bash[22981]: audit 2026-03-09T18:12:09.279078+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:12.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:12 vm03 bash[20762]: cluster 2026-03-09T18:12:11.127689+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:12.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:12 vm03 bash[20762]: cluster 2026-03-09T18:12:11.127689+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:12.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:12 vm09 bash[22981]: cluster 2026-03-09T18:12:11.127689+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:12.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:12 vm09 bash[22981]: cluster 2026-03-09T18:12:11.127689+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:14.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:14 vm03 bash[20762]: cluster 2026-03-09T18:12:13.127947+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:14.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:14 vm03 bash[20762]: cluster 2026-03-09T18:12:13.127947+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:14.619 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 
2026-03-09T18:12:14.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:14 vm09 bash[22981]: cluster 2026-03-09T18:12:13.127947+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:14.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:14 vm09 bash[22981]: cluster 2026-03-09T18:12:13.127947+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:15.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:15 vm03 bash[20762]: audit 2026-03-09T18:12:14.858862+0000 mgr.a (mgr.14150) 59 : audit [DBG] from='client.14200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:12:15.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:15 vm03 bash[20762]: audit 2026-03-09T18:12:14.858862+0000 mgr.a (mgr.14150) 59 : audit [DBG] from='client.14200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:12:15.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:15 vm03 bash[20762]: audit 2026-03-09T18:12:14.860205+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:12:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:15 vm03 bash[20762]: audit 2026-03-09T18:12:14.860205+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:12:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:15 vm03 bash[20762]: audit 2026-03-09T18:12:14.861686+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": 
"client.bootstrap-osd"}]: dispatch 2026-03-09T18:12:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:15 vm03 bash[20762]: audit 2026-03-09T18:12:14.861686+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:12:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:15 vm03 bash[20762]: audit 2026-03-09T18:12:14.862141+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:15.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:15 vm03 bash[20762]: audit 2026-03-09T18:12:14.862141+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:15.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:15 vm09 bash[22981]: audit 2026-03-09T18:12:14.858862+0000 mgr.a (mgr.14150) 59 : audit [DBG] from='client.14200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:12:15.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:15 vm09 bash[22981]: audit 2026-03-09T18:12:14.858862+0000 mgr.a (mgr.14150) 59 : audit [DBG] from='client.14200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:12:15.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:15 vm09 bash[22981]: audit 2026-03-09T18:12:14.860205+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:12:15.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:15 vm09 bash[22981]: audit 2026-03-09T18:12:14.860205+0000 mon.a 
(mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T18:12:15.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:15 vm09 bash[22981]: audit 2026-03-09T18:12:14.861686+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:12:15.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:15 vm09 bash[22981]: audit 2026-03-09T18:12:14.861686+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T18:12:15.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:15 vm09 bash[22981]: audit 2026-03-09T18:12:14.862141+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:15.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:15 vm09 bash[22981]: audit 2026-03-09T18:12:14.862141+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:16.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:16 vm03 bash[20762]: cluster 2026-03-09T18:12:15.128149+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:16.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:16 vm03 bash[20762]: cluster 2026-03-09T18:12:15.128149+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:16.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:16 vm09 bash[22981]: cluster 2026-03-09T18:12:15.128149+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 
0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:16.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:16 vm09 bash[22981]: cluster 2026-03-09T18:12:15.128149+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:18.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:18 vm03 bash[20762]: cluster 2026-03-09T18:12:17.128363+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:18.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:18 vm03 bash[20762]: cluster 2026-03-09T18:12:17.128363+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:18.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:18 vm09 bash[22981]: cluster 2026-03-09T18:12:17.128363+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:18.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:18 vm09 bash[22981]: cluster 2026-03-09T18:12:17.128363+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:20.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: cluster 2026-03-09T18:12:19.128582+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:20.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: cluster 2026-03-09T18:12:19.128582+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: audit 2026-03-09T18:12:19.269211+0000 mon.a (mon.0) 229 : audit [INF] from='client.? 
192.168.123.103:0/2753391349' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6dcc3e3a-5726-4fc0-b79f-03da6ded5591"}]: dispatch 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: audit 2026-03-09T18:12:19.269211+0000 mon.a (mon.0) 229 : audit [INF] from='client.? 192.168.123.103:0/2753391349' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6dcc3e3a-5726-4fc0-b79f-03da6ded5591"}]: dispatch 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: audit 2026-03-09T18:12:19.271953+0000 mon.a (mon.0) 230 : audit [INF] from='client.? 192.168.123.103:0/2753391349' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6dcc3e3a-5726-4fc0-b79f-03da6ded5591"}]': finished 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: audit 2026-03-09T18:12:19.271953+0000 mon.a (mon.0) 230 : audit [INF] from='client.? 192.168.123.103:0/2753391349' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6dcc3e3a-5726-4fc0-b79f-03da6ded5591"}]': finished 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: cluster 2026-03-09T18:12:19.275428+0000 mon.a (mon.0) 231 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: cluster 2026-03-09T18:12:19.275428+0000 mon.a (mon.0) 231 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: audit 2026-03-09T18:12:19.275622+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: audit 2026-03-09T18:12:19.275622+0000 mon.a (mon.0) 232 : audit [DBG] 
from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: audit 2026-03-09T18:12:19.861712+0000 mon.a (mon.0) 233 : audit [DBG] from='client.? 192.168.123.103:0/3986830111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:12:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:20 vm03 bash[20762]: audit 2026-03-09T18:12:19.861712+0000 mon.a (mon.0) 233 : audit [DBG] from='client.? 192.168.123.103:0/3986830111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: cluster 2026-03-09T18:12:19.128582+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: cluster 2026-03-09T18:12:19.128582+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: audit 2026-03-09T18:12:19.269211+0000 mon.a (mon.0) 229 : audit [INF] from='client.? 192.168.123.103:0/2753391349' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6dcc3e3a-5726-4fc0-b79f-03da6ded5591"}]: dispatch 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: audit 2026-03-09T18:12:19.269211+0000 mon.a (mon.0) 229 : audit [INF] from='client.? 192.168.123.103:0/2753391349' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6dcc3e3a-5726-4fc0-b79f-03da6ded5591"}]: dispatch 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: audit 2026-03-09T18:12:19.271953+0000 mon.a (mon.0) 230 : audit [INF] from='client.? 
192.168.123.103:0/2753391349' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6dcc3e3a-5726-4fc0-b79f-03da6ded5591"}]': finished 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: audit 2026-03-09T18:12:19.271953+0000 mon.a (mon.0) 230 : audit [INF] from='client.? 192.168.123.103:0/2753391349' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6dcc3e3a-5726-4fc0-b79f-03da6ded5591"}]': finished 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: cluster 2026-03-09T18:12:19.275428+0000 mon.a (mon.0) 231 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: cluster 2026-03-09T18:12:19.275428+0000 mon.a (mon.0) 231 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: audit 2026-03-09T18:12:19.275622+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: audit 2026-03-09T18:12:19.275622+0000 mon.a (mon.0) 232 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: audit 2026-03-09T18:12:19.861712+0000 mon.a (mon.0) 233 : audit [DBG] from='client.? 192.168.123.103:0/3986830111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:12:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:20 vm09 bash[22981]: audit 2026-03-09T18:12:19.861712+0000 mon.a (mon.0) 233 : audit [DBG] from='client.? 
192.168.123.103:0/3986830111' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:12:22.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:22 vm03 bash[20762]: cluster 2026-03-09T18:12:21.128800+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:22.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:22 vm03 bash[20762]: cluster 2026-03-09T18:12:21.128800+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:22.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:22 vm09 bash[22981]: cluster 2026-03-09T18:12:21.128800+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:22.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:22 vm09 bash[22981]: cluster 2026-03-09T18:12:21.128800+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:24.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:24 vm03 bash[20762]: cluster 2026-03-09T18:12:23.129056+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:24.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:24 vm03 bash[20762]: cluster 2026-03-09T18:12:23.129056+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:24.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:24 vm09 bash[22981]: cluster 2026-03-09T18:12:23.129056+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:24.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:24 vm09 bash[22981]: cluster 2026-03-09T18:12:23.129056+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:26.572 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:26 vm03 bash[20762]: cluster 2026-03-09T18:12:25.129342+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:26.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:26 vm03 bash[20762]: cluster 2026-03-09T18:12:25.129342+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:26.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:26 vm09 bash[22981]: cluster 2026-03-09T18:12:25.129342+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:26.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:26 vm09 bash[22981]: cluster 2026-03-09T18:12:25.129342+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:28.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:28 vm03 bash[20762]: cluster 2026-03-09T18:12:27.129596+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:28.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:28 vm03 bash[20762]: cluster 2026-03-09T18:12:27.129596+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:28.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:28 vm03 bash[20762]: audit 2026-03-09T18:12:28.217852+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:12:28.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:28 vm03 bash[20762]: audit 2026-03-09T18:12:28.217852+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:12:28.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
18:12:28 vm03 bash[20762]: audit 2026-03-09T18:12:28.218434+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:28.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:28 vm03 bash[20762]: audit 2026-03-09T18:12:28.218434+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:28.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:28 vm09 bash[22981]: cluster 2026-03-09T18:12:27.129596+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:28.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:28 vm09 bash[22981]: cluster 2026-03-09T18:12:27.129596+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:28.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:28 vm09 bash[22981]: audit 2026-03-09T18:12:28.217852+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:12:28.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:28 vm09 bash[22981]: audit 2026-03-09T18:12:28.217852+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T18:12:28.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:28 vm09 bash[22981]: audit 2026-03-09T18:12:28.218434+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:28.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:28 vm09 bash[22981]: audit 2026-03-09T18:12:28.218434+0000 mon.a (mon.0) 235 : audit [DBG] 
from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:29.043 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:28 vm03 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:29.043 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:12:28 vm03 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:29.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:29 vm03 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T18:12:29.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:29 vm03 bash[20762]: cephadm 2026-03-09T18:12:28.218982+0000 mgr.a (mgr.14150) 67 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T18:12:29.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:29 vm03 bash[20762]: cephadm 2026-03-09T18:12:28.218982+0000 mgr.a (mgr.14150) 67 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T18:12:29.323 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 18:12:29 vm03 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:12:29.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:29 vm09 bash[22981]: cephadm 2026-03-09T18:12:28.218982+0000 mgr.a (mgr.14150) 67 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T18:12:29.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:29 vm09 bash[22981]: cephadm 2026-03-09T18:12:28.218982+0000 mgr.a (mgr.14150) 67 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T18:12:30.512 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:30 vm03 bash[20762]: cluster 2026-03-09T18:12:29.129829+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:30.512 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:30 vm03 bash[20762]: cluster 2026-03-09T18:12:29.129829+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:30.512 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:30 vm03 bash[20762]: audit 2026-03-09T18:12:29.274036+0000 mon.a (mon.0) 236 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-09T18:12:30.512 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:30 vm03 bash[20762]: audit 2026-03-09T18:12:29.274036+0000 mon.a (mon.0) 236 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:12:30.512 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:30 vm03 bash[20762]: audit 2026-03-09T18:12:29.281930+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:30.512 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:30 vm03 bash[20762]: audit 2026-03-09T18:12:29.281930+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:30.512 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:30 vm03 bash[20762]: audit 2026-03-09T18:12:29.290009+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:30.512 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:30 vm03 bash[20762]: audit 2026-03-09T18:12:29.290009+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:30.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:30 vm09 bash[22981]: cluster 2026-03-09T18:12:29.129829+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:30.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:30 vm09 bash[22981]: cluster 2026-03-09T18:12:29.129829+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:30.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:30 vm09 bash[22981]: audit 2026-03-09T18:12:29.274036+0000 mon.a (mon.0) 236 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-09T18:12:30.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:30 vm09 bash[22981]: audit 2026-03-09T18:12:29.274036+0000 mon.a (mon.0) 236 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:12:30.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:30 vm09 bash[22981]: audit 2026-03-09T18:12:29.281930+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:30.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:30 vm09 bash[22981]: audit 2026-03-09T18:12:29.281930+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:30.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:30 vm09 bash[22981]: audit 2026-03-09T18:12:29.290009+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:30.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:30 vm09 bash[22981]: audit 2026-03-09T18:12:29.290009+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:32.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:32 vm03 bash[20762]: cluster 2026-03-09T18:12:31.130016+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:32.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:32 vm03 bash[20762]: cluster 2026-03-09T18:12:31.130016+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:32.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:32 vm09 bash[22981]: cluster 2026-03-09T18:12:31.130016+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:32.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:32 vm09 bash[22981]: cluster 
2026-03-09T18:12:31.130016+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:34.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:34 vm03 bash[20762]: cluster 2026-03-09T18:12:33.130451+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:34.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:34 vm03 bash[20762]: cluster 2026-03-09T18:12:33.130451+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:34.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:34 vm03 bash[20762]: audit 2026-03-09T18:12:33.375992+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:12:34.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:34 vm03 bash[20762]: audit 2026-03-09T18:12:33.375992+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:12:34.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:34 vm09 bash[22981]: cluster 2026-03-09T18:12:33.130451+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:34.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:34 vm09 bash[22981]: cluster 2026-03-09T18:12:33.130451+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:34.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:34 vm09 bash[22981]: audit 2026-03-09T18:12:33.375992+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 
[v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:12:34.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:34 vm09 bash[22981]: audit 2026-03-09T18:12:33.375992+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T18:12:35.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:35 vm03 bash[20762]: audit 2026-03-09T18:12:34.250801+0000 mon.a (mon.0) 240 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:12:35.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:35 vm03 bash[20762]: audit 2026-03-09T18:12:34.250801+0000 mon.a (mon.0) 240 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:12:35.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:35 vm03 bash[20762]: cluster 2026-03-09T18:12:34.252844+0000 mon.a (mon.0) 241 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:12:35.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:35 vm03 bash[20762]: cluster 2026-03-09T18:12:34.252844+0000 mon.a (mon.0) 241 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:12:35.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:35 vm03 bash[20762]: audit 2026-03-09T18:12:34.252969+0000 mon.a (mon.0) 242 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": 
["host=vm03", "root=default"]}]: dispatch 2026-03-09T18:12:35.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:35 vm03 bash[20762]: audit 2026-03-09T18:12:34.252969+0000 mon.a (mon.0) 242 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T18:12:35.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:35 vm03 bash[20762]: audit 2026-03-09T18:12:34.253056+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:35.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:35 vm03 bash[20762]: audit 2026-03-09T18:12:34.253056+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:35.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:35 vm09 bash[22981]: audit 2026-03-09T18:12:34.250801+0000 mon.a (mon.0) 240 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:12:35.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:35 vm09 bash[22981]: audit 2026-03-09T18:12:34.250801+0000 mon.a (mon.0) 240 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T18:12:35.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:35 vm09 bash[22981]: cluster 2026-03-09T18:12:34.252844+0000 mon.a (mon.0) 241 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:12:35.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:35 vm09 bash[22981]: 
cluster 2026-03-09T18:12:34.252844+0000 mon.a (mon.0) 241 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T18:12:35.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:35 vm09 bash[22981]: audit 2026-03-09T18:12:34.252969+0000 mon.a (mon.0) 242 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T18:12:35.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:35 vm09 bash[22981]: audit 2026-03-09T18:12:34.252969+0000 mon.a (mon.0) 242 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T18:12:35.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:35 vm09 bash[22981]: audit 2026-03-09T18:12:34.253056+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:35.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:35 vm09 bash[22981]: audit 2026-03-09T18:12:34.253056+0000 mon.a (mon.0) 243 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:36.436 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 0 on host 'vm03' 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: cluster 2026-03-09T18:12:35.130736+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: cluster 2026-03-09T18:12:35.130736+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.253836+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.253836+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: cluster 2026-03-09T18:12:35.255976+0000 mon.a (mon.0) 245 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: cluster 2026-03-09T18:12:35.255976+0000 mon.a (mon.0) 245 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.257035+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.257035+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.261039+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd 
metadata", "id": 0}]: dispatch 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.261039+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.396656+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.396656+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.400041+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.400041+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.803609+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.803609+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.804231+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 
192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.804231+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.809011+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:36.450 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:36 vm03 bash[20762]: audit 2026-03-09T18:12:35.809011+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:12:36.540 DEBUG:teuthology.orchestra.run.vm03:osd.0> sudo journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@osd.0.service 2026-03-09T18:12:36.541 INFO:tasks.cephadm:Deploying osd.1 on vm09 with /dev/vde... 
2026-03-09T18:12:36.541 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- lvm zap /dev/vde
2026-03-09T18:12:36.547 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: cluster 2026-03-09T18:12:35.130736+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:12:36.547 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: audit 2026-03-09T18:12:35.253836+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished
2026-03-09T18:12:36.547 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: cluster 2026-03-09T18:12:35.255976+0000 mon.a (mon.0) 245 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-09T18:12:36.548 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: audit 2026-03-09T18:12:35.257035+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:12:36.548 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: audit 2026-03-09T18:12:35.261039+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:12:36.548 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: audit 2026-03-09T18:12:35.396656+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:36.548 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: audit 2026-03-09T18:12:35.400041+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:36.548 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: audit 2026-03-09T18:12:35.803609+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:36.548 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: audit 2026-03-09T18:12:35.804231+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:12:36.548 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:36 vm09 bash[22981]: audit 2026-03-09T18:12:35.809011+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:37.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: cluster 2026-03-09T18:12:34.367932+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T18:12:37.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: cluster 2026-03-09T18:12:34.367986+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T18:12:37.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: audit 2026-03-09T18:12:36.260147+0000 mon.a (mon.0) 253 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:12:37.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: cluster 2026-03-09T18:12:36.276148+0000 mon.a (mon.0) 254 : cluster [INF] osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692] boot
2026-03-09T18:12:37.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: cluster 2026-03-09T18:12:36.276191+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-09T18:12:37.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: audit 2026-03-09T18:12:36.276246+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:12:37.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: audit 2026-03-09T18:12:36.423987+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:12:37.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: audit 2026-03-09T18:12:36.429134+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:37.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:37 vm03 bash[20762]: audit 2026-03-09T18:12:36.433032+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: cluster 2026-03-09T18:12:34.367932+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: cluster 2026-03-09T18:12:34.367986+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: audit 2026-03-09T18:12:36.260147+0000 mon.a (mon.0) 253 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: cluster 2026-03-09T18:12:36.276148+0000 mon.a (mon.0) 254 : cluster [INF] osd.0 [v2:192.168.123.103:6802/3690867692,v1:192.168.123.103:6803/3690867692] boot
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: cluster 2026-03-09T18:12:36.276191+0000 mon.a (mon.0) 255 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: audit 2026-03-09T18:12:36.276246+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: audit 2026-03-09T18:12:36.423987+0000 mon.a (mon.0) 257 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: audit 2026-03-09T18:12:36.429134+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:37.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:37 vm09 bash[22981]: audit 2026-03-09T18:12:36.433032+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:38.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:38 vm03 bash[20762]: cluster 2026-03-09T18:12:37.130951+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:12:38.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:38 vm03 bash[20762]: cluster 2026-03-09T18:12:37.279035+0000 mon.a (mon.0) 260 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T18:12:38.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:38 vm09 bash[22981]: cluster 2026-03-09T18:12:37.130951+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-09T18:12:38.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:38 vm09 bash[22981]: cluster 2026-03-09T18:12:37.279035+0000 mon.a (mon.0) 260 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-09T18:12:40.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:40 vm03 bash[20762]: cluster 2026-03-09T18:12:39.131249+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:40.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:40 vm09 bash[22981]: cluster 2026-03-09T18:12:39.131249+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:41.154 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.b/config
2026-03-09T18:12:41.990 INFO:teuthology.orchestra.run.vm09.stdout:
2026-03-09T18:12:42.003 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph orch daemon add osd vm09:/dev/vde
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:42 vm09 bash[22981]: cluster 2026-03-09T18:12:41.131463+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:42 vm09 bash[22981]: audit 2026-03-09T18:12:42.085609+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:42 vm09 bash[22981]: audit 2026-03-09T18:12:42.090030+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:42 vm09 bash[22981]: audit 2026-03-09T18:12:42.090977+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:42 vm09 bash[22981]: audit 2026-03-09T18:12:42.092033+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:42 vm09 bash[22981]: audit 2026-03-09T18:12:42.092422+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:42 vm09 bash[22981]: audit 2026-03-09T18:12:42.096153+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:42 vm03 bash[20762]: cluster 2026-03-09T18:12:41.131463+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:42 vm03 bash[20762]: audit 2026-03-09T18:12:42.085609+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:42.290 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:42 vm03 bash[20762]: audit 2026-03-09T18:12:42.090030+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:42.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:42 vm03 bash[20762]: audit 2026-03-09T18:12:42.090977+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-09T18:12:42.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:42 vm03 bash[20762]: audit 2026-03-09T18:12:42.092033+0000 mon.a (mon.0) 264 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:42.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:42 vm03 bash[20762]: audit 2026-03-09T18:12:42.092422+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-09T18:12:42.291 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:42 vm03 bash[20762]: audit 2026-03-09T18:12:42.096153+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a'
2026-03-09T18:12:43.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:43 vm03 bash[20762]: cephadm 2026-03-09T18:12:42.079856+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Detected new or changed devices on vm03
2026-03-09T18:12:43.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:43 vm03 bash[20762]: cephadm 2026-03-09T18:12:42.091354+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-09T18:12:43.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:43 vm03 bash[20762]: cephadm 2026-03-09T18:12:42.091729+0000 mgr.a (mgr.14150) 77 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T18:12:43.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:43 vm09 bash[22981]: cephadm 2026-03-09T18:12:42.079856+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Detected new or changed devices on vm03
2026-03-09T18:12:43.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:43 vm09 bash[22981]: cephadm 2026-03-09T18:12:42.091354+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M
2026-03-09T18:12:43.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:43 vm09 bash[22981]: cephadm 2026-03-09T18:12:42.091729+0000 mgr.a (mgr.14150) 77 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-09T18:12:44.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:44 vm03 bash[20762]: cluster 2026-03-09T18:12:43.131744+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:44.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:44 vm09 bash[22981]: cluster 2026-03-09T18:12:43.131744+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:46.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:46 vm03 bash[20762]: cluster 2026-03-09T18:12:45.131940+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:46.660 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.b/config
2026-03-09T18:12:46.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:46 vm09 bash[22981]: cluster 2026-03-09T18:12:45.131940+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:47.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:47 vm03 bash[20762]: audit 2026-03-09T18:12:46.911981+0000 mgr.a (mgr.14150) 80 : audit [DBG] from='client.24123 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:12:47.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:47 vm03 bash[20762]: audit 2026-03-09T18:12:46.913471+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T18:12:47.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:47 vm03 bash[20762]: audit 2026-03-09T18:12:46.915275+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T18:12:47.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:47 vm03 bash[20762]: audit 2026-03-09T18:12:46.915811+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:47.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:47 vm09 bash[22981]: audit 2026-03-09T18:12:46.911981+0000 mgr.a (mgr.14150) 80 : audit [DBG] from='client.24123 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm09:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-09T18:12:47.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:47 vm09 bash[22981]: audit 2026-03-09T18:12:46.913471+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-09T18:12:47.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:47 vm09 bash[22981]: audit 2026-03-09T18:12:46.915275+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-09T18:12:47.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:47 vm09 bash[22981]: audit 2026-03-09T18:12:46.915811+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-09T18:12:48.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:48 vm03 bash[20762]: cluster 2026-03-09T18:12:47.132208+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:48.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:48 vm09 bash[22981]: cluster 2026-03-09T18:12:47.132208+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:50.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:50 vm03 bash[20762]: cluster 2026-03-09T18:12:49.132510+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:50.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:50 vm09 bash[22981]: cluster 2026-03-09T18:12:49.132510+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:52.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:52 vm03 bash[20762]: cluster 2026-03-09T18:12:51.132719+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:52.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:52 vm03 bash[20762]: audit 2026-03-09T18:12:52.299010+0000 mon.b (mon.1) 6 : audit [INF] from='client.? 192.168.123.109:0/2545247710' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "73ff702d-91cb-4376-b927-a763bfb3015c"}]: dispatch
2026-03-09T18:12:52.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:52 vm03 bash[20762]: audit 2026-03-09T18:12:52.299226+0000 mon.a (mon.0) 270 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "73ff702d-91cb-4376-b927-a763bfb3015c"}]: dispatch
2026-03-09T18:12:52.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:52 vm03 bash[20762]: audit 2026-03-09T18:12:52.301952+0000 mon.a (mon.0) 271 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "73ff702d-91cb-4376-b927-a763bfb3015c"}]': finished
2026-03-09T18:12:52.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:52 vm03 bash[20762]: cluster 2026-03-09T18:12:52.304283+0000 mon.a (mon.0) 272 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-09T18:12:52.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:52 vm03 bash[20762]: audit 2026-03-09T18:12:52.304408+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: cluster 2026-03-09T18:12:51.132719+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: audit 2026-03-09T18:12:52.299010+0000 mon.b (mon.1) 6 : audit [INF] from='client.? 192.168.123.109:0/2545247710' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "73ff702d-91cb-4376-b927-a763bfb3015c"}]: dispatch
2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: audit 2026-03-09T18:12:52.299226+0000 mon.a (mon.0) 270 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "73ff702d-91cb-4376-b927-a763bfb3015c"}]: dispatch
2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: audit 2026-03-09T18:12:52.301952+0000 mon.a (mon.0) 271 : audit [INF] from='client.?
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "73ff702d-91cb-4376-b927-a763bfb3015c"}]': finished 2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: audit 2026-03-09T18:12:52.301952+0000 mon.a (mon.0) 271 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "73ff702d-91cb-4376-b927-a763bfb3015c"}]': finished 2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: cluster 2026-03-09T18:12:52.304283+0000 mon.a (mon.0) 272 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: cluster 2026-03-09T18:12:52.304283+0000 mon.a (mon.0) 272 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: audit 2026-03-09T18:12:52.304408+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:12:52.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:52 vm09 bash[22981]: audit 2026-03-09T18:12:52.304408+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:12:53.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:53 vm09 bash[22981]: audit 2026-03-09T18:12:52.920508+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/2198620939' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:12:53.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:53 vm09 bash[22981]: audit 2026-03-09T18:12:52.920508+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 
192.168.123.109:0/2198620939' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:12:53.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:53 vm03 bash[20762]: audit 2026-03-09T18:12:52.920508+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/2198620939' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:12:53.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:53 vm03 bash[20762]: audit 2026-03-09T18:12:52.920508+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.109:0/2198620939' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T18:12:54.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:54 vm09 bash[22981]: cluster 2026-03-09T18:12:53.132946+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:54.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:54 vm09 bash[22981]: cluster 2026-03-09T18:12:53.132946+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:54.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:54 vm03 bash[20762]: cluster 2026-03-09T18:12:53.132946+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:54.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:54 vm03 bash[20762]: cluster 2026-03-09T18:12:53.132946+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:56.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:56 vm09 bash[22981]: cluster 2026-03-09T18:12:55.133173+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:56.391 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:56 vm09 bash[22981]: cluster 2026-03-09T18:12:55.133173+0000 
mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:56.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:56 vm03 bash[20762]: cluster 2026-03-09T18:12:55.133173+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:56.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:56 vm03 bash[20762]: cluster 2026-03-09T18:12:55.133173+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:58.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:58 vm09 bash[22981]: cluster 2026-03-09T18:12:57.133415+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:58.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:12:58 vm09 bash[22981]: cluster 2026-03-09T18:12:57.133415+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:58.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:58 vm03 bash[20762]: cluster 2026-03-09T18:12:57.133415+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:12:58.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:12:58 vm03 bash[20762]: cluster 2026-03-09T18:12:57.133415+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:00 vm03 bash[20762]: cluster 2026-03-09T18:12:59.133681+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:00.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:00 vm03 bash[20762]: cluster 2026-03-09T18:12:59.133681+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB 
/ 20 GiB avail 2026-03-09T18:13:00.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:00 vm09 bash[22981]: cluster 2026-03-09T18:12:59.133681+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:00.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:00 vm09 bash[22981]: cluster 2026-03-09T18:12:59.133681+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:01.560 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:01 vm09 bash[22981]: audit 2026-03-09T18:13:01.316475+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:13:01.560 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:01 vm09 bash[22981]: audit 2026-03-09T18:13:01.316475+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:13:01.560 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:01 vm09 bash[22981]: audit 2026-03-09T18:13:01.317040+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:01.560 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:01 vm09 bash[22981]: audit 2026-03-09T18:13:01.317040+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:01.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:01 vm03 bash[20762]: audit 2026-03-09T18:13:01.316475+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:13:01.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
18:13:01 vm03 bash[20762]: audit 2026-03-09T18:13:01.316475+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T18:13:01.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:01 vm03 bash[20762]: audit 2026-03-09T18:13:01.317040+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:01.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:01 vm03 bash[20762]: audit 2026-03-09T18:13:01.317040+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:02.114 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:13:02.114 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:13:02 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:13:02.368 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:13:02.368 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:13:02 vm09 systemd[1]: /etc/systemd/system/ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: cluster 2026-03-09T18:13:01.133882+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: cluster 2026-03-09T18:13:01.133882+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: cephadm 2026-03-09T18:13:01.317426+0000 mgr.a (mgr.14150) 89 : cephadm [INF] Deploying daemon osd.1 on vm09 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: cephadm 2026-03-09T18:13:01.317426+0000 mgr.a (mgr.14150) 89 : cephadm [INF] Deploying daemon osd.1 on vm09 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: audit 2026-03-09T18:13:02.350521+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: audit 2026-03-09T18:13:02.350521+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 
192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: audit 2026-03-09T18:13:02.356081+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: audit 2026-03-09T18:13:02.356081+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: audit 2026-03-09T18:13:02.360527+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:02.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:02 vm09 bash[22981]: audit 2026-03-09T18:13:02.360527+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:02.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: cluster 2026-03-09T18:13:01.133882+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:02.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: cluster 2026-03-09T18:13:01.133882+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:02.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: cephadm 2026-03-09T18:13:01.317426+0000 mgr.a (mgr.14150) 89 : cephadm [INF] Deploying daemon osd.1 on vm09 2026-03-09T18:13:02.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: cephadm 2026-03-09T18:13:01.317426+0000 mgr.a (mgr.14150) 89 : cephadm [INF] Deploying daemon osd.1 on vm09 2026-03-09T18:13:02.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 
bash[20762]: audit 2026-03-09T18:13:02.350521+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:13:02.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: audit 2026-03-09T18:13:02.350521+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:13:02.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: audit 2026-03-09T18:13:02.356081+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:02.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: audit 2026-03-09T18:13:02.356081+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:02.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: audit 2026-03-09T18:13:02.360527+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:02.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:02 vm03 bash[20762]: audit 2026-03-09T18:13:02.360527+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:04.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:04 vm03 bash[20762]: cluster 2026-03-09T18:13:03.134155+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:04.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:04 vm03 bash[20762]: cluster 2026-03-09T18:13:03.134155+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:04.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:04 vm09 bash[22981]: cluster 
2026-03-09T18:13:03.134155+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:04.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:04 vm09 bash[22981]: cluster 2026-03-09T18:13:03.134155+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:06.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:06 vm03 bash[20762]: cluster 2026-03-09T18:13:05.134383+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:06.822 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:06 vm03 bash[20762]: cluster 2026-03-09T18:13:05.134383+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:06.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:06 vm03 bash[20762]: audit 2026-03-09T18:13:06.104788+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:13:06.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:06 vm03 bash[20762]: audit 2026-03-09T18:13:06.104788+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:13:06.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:06 vm03 bash[20762]: audit 2026-03-09T18:13:06.104813+0000 mon.b (mon.1) 8 : audit [INF] from='osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:13:06.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:06 vm03 bash[20762]: audit 2026-03-09T18:13:06.104813+0000 mon.b (mon.1) 8 : audit [INF] from='osd.1 
[v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:13:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:06 vm09 bash[22981]: cluster 2026-03-09T18:13:05.134383+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:06 vm09 bash[22981]: cluster 2026-03-09T18:13:05.134383+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:06 vm09 bash[22981]: audit 2026-03-09T18:13:06.104788+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:13:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:06 vm09 bash[22981]: audit 2026-03-09T18:13:06.104788+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:13:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:06 vm09 bash[22981]: audit 2026-03-09T18:13:06.104813+0000 mon.b (mon.1) 8 : audit [INF] from='osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:13:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:06 vm09 bash[22981]: audit 2026-03-09T18:13:06.104813+0000 mon.b (mon.1) 8 : audit [INF] from='osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:06.506595+0000 mon.a (mon.0) 280 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:06.506595+0000 mon.a (mon.0) 280 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: cluster 2026-03-09T18:13:06.509312+0000 mon.a (mon.0) 281 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: cluster 2026-03-09T18:13:06.509312+0000 mon.a (mon.0) 281 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:06.510032+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:06.510032+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:06.510327+0000 mon.a (mon.0) 283 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:06.510327+0000 mon.a (mon.0) 283 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd 
crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:06.510414+0000 mon.b (mon.1) 9 : audit [INF] from='osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:06.510414+0000 mon.b (mon.1) 9 : audit [INF] from='osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:07.509705+0000 mon.a (mon.0) 284 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: audit 2026-03-09T18:13:07.509705+0000 mon.a (mon.0) 284 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: cluster 2026-03-09T18:13:07.513063+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T18:13:07.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:07 vm03 bash[20762]: cluster 2026-03-09T18:13:07.513063+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 
bash[22981]: audit 2026-03-09T18:13:06.506595+0000 mon.a (mon.0) 280 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:06.506595+0000 mon.a (mon.0) 280 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: cluster 2026-03-09T18:13:06.509312+0000 mon.a (mon.0) 281 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: cluster 2026-03-09T18:13:06.509312+0000 mon.a (mon.0) 281 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:06.510032+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:06.510032+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:06.510327+0000 mon.a (mon.0) 283 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:06.510327+0000 mon.a (mon.0) 283 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush 
create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:06.510414+0000 mon.b (mon.1) 9 : audit [INF] from='osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:13:07.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:06.510414+0000 mon.b (mon.1) 9 : audit [INF] from='osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]: dispatch 2026-03-09T18:13:07.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:07.509705+0000 mon.a (mon.0) 284 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:13:07.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: audit 2026-03-09T18:13:07.509705+0000 mon.a (mon.0) 284 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm09", "root=default"]}]': finished 2026-03-09T18:13:07.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: cluster 2026-03-09T18:13:07.513063+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T18:13:07.915 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:07 vm09 bash[22981]: cluster 2026-03-09T18:13:07.513063+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 
bash[22981]: cluster 2026-03-09T18:13:07.134622+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 bash[22981]: cluster 2026-03-09T18:13:07.134622+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 bash[22981]: audit 2026-03-09T18:13:07.513910+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 bash[22981]: audit 2026-03-09T18:13:07.513910+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 bash[22981]: audit 2026-03-09T18:13:07.518612+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 bash[22981]: audit 2026-03-09T18:13:07.518612+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 bash[22981]: audit 2026-03-09T18:13:08.423693+0000 mon.a (mon.0) 288 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 bash[22981]: audit 2026-03-09T18:13:08.423693+0000 mon.a (mon.0) 288 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 
bash[22981]: audit 2026-03-09T18:13:08.520284+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.619 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:08 vm09 bash[22981]: audit 2026-03-09T18:13:08.520284+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: cluster 2026-03-09T18:13:07.134622+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: cluster 2026-03-09T18:13:07.134622+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: audit 2026-03-09T18:13:07.513910+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: audit 2026-03-09T18:13:07.513910+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: audit 2026-03-09T18:13:07.518612+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: audit 2026-03-09T18:13:07.518612+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 
192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: audit 2026-03-09T18:13:08.423693+0000 mon.a (mon.0) 288 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: audit 2026-03-09T18:13:08.423693+0000 mon.a (mon.0) 288 : audit [INF] from='osd.1 ' entity='osd.1' 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: audit 2026-03-09T18:13:08.520284+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:08.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:08 vm03 bash[20762]: audit 2026-03-09T18:13:08.520284+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:09.577 INFO:teuthology.orchestra.run.vm09.stdout:Created osd(s) 1 on host 'vm09' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: cluster 2026-03-09T18:13:07.058627+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: cluster 2026-03-09T18:13:07.058627+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: cluster 2026-03-09T18:13:07.058683+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: cluster 2026-03-09T18:13:07.058683+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 
vm09 bash[22981]: audit 2026-03-09T18:13:08.579112+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.579112+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.583284+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.583284+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.958338+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.958338+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.958951+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.958951+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.963224+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:08.963224+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: cluster 2026-03-09T18:13:09.428649+0000 mon.a (mon.0) 295 : cluster [INF] osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993] boot 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: cluster 2026-03-09T18:13:09.428649+0000 mon.a (mon.0) 295 : cluster [INF] osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993] boot 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: cluster 2026-03-09T18:13:09.428677+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: cluster 2026-03-09T18:13:09.428677+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:09.428742+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:09.428742+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 
18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:09.566828+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:09.566828+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:09.571225+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:09.571225+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:09.575442+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:09 vm09 bash[22981]: audit 2026-03-09T18:13:09.575442+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:09.656 DEBUG:teuthology.orchestra.run.vm09:osd.1> sudo journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@osd.1.service 2026-03-09T18:13:09.657 INFO:tasks.cephadm:Waiting for 2 OSDs to come up... 
2026-03-09T18:13:09.657 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph osd stat -f json 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: cluster 2026-03-09T18:13:07.058627+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: cluster 2026-03-09T18:13:07.058627+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: cluster 2026-03-09T18:13:07.058683+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: cluster 2026-03-09T18:13:07.058683+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.579112+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.579112+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.583284+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.583284+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 
2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.958338+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.958338+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.958951+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.958951+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.963224+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:08.963224+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: cluster 2026-03-09T18:13:09.428649+0000 mon.a (mon.0) 295 : cluster [INF] osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993] boot 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: cluster 
2026-03-09T18:13:09.428649+0000 mon.a (mon.0) 295 : cluster [INF] osd.1 [v2:192.168.123.109:6800/1836659993,v1:192.168.123.109:6801/1836659993] boot 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: cluster 2026-03-09T18:13:09.428677+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: cluster 2026-03-09T18:13:09.428677+0000 mon.a (mon.0) 296 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:09.428742+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:09.428742+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:09.566828+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:09.566828+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:09.571225+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 
vm03 bash[20762]: audit 2026-03-09T18:13:09.571225+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:09.575442+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:10.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:09 vm03 bash[20762]: audit 2026-03-09T18:13:09.575442+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:11.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:10 vm03 bash[20762]: cluster 2026-03-09T18:13:09.134905+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:11.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:10 vm03 bash[20762]: cluster 2026-03-09T18:13:09.134905+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:11.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:10 vm09 bash[22981]: cluster 2026-03-09T18:13:09.134905+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:11.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:10 vm09 bash[22981]: cluster 2026-03-09T18:13:09.134905+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T18:13:12.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:11 vm03 bash[20762]: cluster 2026-03-09T18:13:10.976427+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T18:13:12.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:11 vm03 bash[20762]: cluster 2026-03-09T18:13:10.976427+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T18:13:12.414 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:11 vm09 bash[22981]: cluster 2026-03-09T18:13:10.976427+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T18:13:12.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:11 vm09 bash[22981]: cluster 2026-03-09T18:13:10.976427+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T18:13:13.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:12 vm03 bash[20762]: cluster 2026-03-09T18:13:11.135156+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:13.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:12 vm03 bash[20762]: cluster 2026-03-09T18:13:11.135156+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:13.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:12 vm09 bash[22981]: cluster 2026-03-09T18:13:11.135156+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:13.414 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:12 vm09 bash[22981]: cluster 2026-03-09T18:13:11.135156+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:14.274 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:14.539 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:13:14.590 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":14,"num_osds":2,"num_up_osds":2,"osd_up_since":1773079989,"num_in_osds":2,"osd_in_since":1773079972,"num_remapped_pgs":0} 2026-03-09T18:13:14.591 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph osd dump 
--format=json 2026-03-09T18:13:14.993 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:14 vm03 bash[20762]: cluster 2026-03-09T18:13:13.135724+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:14.993 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:14 vm03 bash[20762]: cluster 2026-03-09T18:13:13.135724+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:14.993 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:14 vm03 bash[20762]: audit 2026-03-09T18:13:14.538904+0000 mon.a (mon.0) 302 : audit [DBG] from='client.? 192.168.123.103:0/3556741584' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:13:14.993 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:14 vm03 bash[20762]: audit 2026-03-09T18:13:14.538904+0000 mon.a (mon.0) 302 : audit [DBG] from='client.? 192.168.123.103:0/3556741584' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:13:15.374 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:14 vm09 bash[22981]: cluster 2026-03-09T18:13:13.135724+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:15.374 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:14 vm09 bash[22981]: cluster 2026-03-09T18:13:13.135724+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:15.374 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:14 vm09 bash[22981]: audit 2026-03-09T18:13:14.538904+0000 mon.a (mon.0) 302 : audit [DBG] from='client.? 
192.168.123.103:0/3556741584' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:13:15.374 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:14 vm09 bash[22981]: audit 2026-03-09T18:13:14.538904+0000 mon.a (mon.0) 302 : audit [DBG] from='client.? 192.168.123.103:0/3556741584' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: cluster 2026-03-09T18:13:15.135981+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: cluster 2026-03-09T18:13:15.135981+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: cephadm 2026-03-09T18:13:15.163093+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: cephadm 2026-03-09T18:13:15.163093+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.168089+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.168089+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.171673+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 
2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.171673+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.172222+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.172222+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: cephadm 2026-03-09T18:13:15.172546+0000 mgr.a (mgr.14150) 98 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: cephadm 2026-03-09T18:13:15.172546+0000 mgr.a (mgr.14150) 98 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: cephadm 2026-03-09T18:13:15.172942+0000 mgr.a (mgr.14150) 99 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: cephadm 2026-03-09T18:13:15.172942+0000 mgr.a (mgr.14150) 99 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 
2026-03-09T18:13:15.173202+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.173202+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.173592+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.173592+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.177450+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:16 vm03 bash[20762]: audit 2026-03-09T18:13:15.177450+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: cluster 2026-03-09T18:13:15.135981+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:16.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: cluster 2026-03-09T18:13:15.135981+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 
2026-03-09T18:13:16.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: cephadm 2026-03-09T18:13:15.163093+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T18:13:16.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: cephadm 2026-03-09T18:13:15.163093+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Detected new or changed devices on vm09 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.168089+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.168089+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.171673+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.171673+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.172222+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.172222+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T18:13:16.665 
INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: cephadm 2026-03-09T18:13:15.172546+0000 mgr.a (mgr.14150) 98 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: cephadm 2026-03-09T18:13:15.172546+0000 mgr.a (mgr.14150) 98 : cephadm [INF] Adjusting osd_memory_target on vm09 to 455.7M 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: cephadm 2026-03-09T18:13:15.172942+0000 mgr.a (mgr.14150) 99 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: cephadm 2026-03-09T18:13:15.172942+0000 mgr.a (mgr.14150) 99 : cephadm [WRN] Unable to set osd_memory_target on vm09 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.173202+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.173202+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.173592+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 
2026-03-09T18:13:15.173592+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.177450+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:16.665 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:16 vm09 bash[22981]: audit 2026-03-09T18:13:15.177450+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.103:0/1129300204' entity='mgr.a' 2026-03-09T18:13:18.285 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:18.533 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:13:18.533 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":14,"fsid":"24200844-1be3-11f1-b4ce-2b35a0bfc236","created":"2026-03-09T18:10:53.889909+0000","modified":"2026-03-09T18:13:10.965628+0000","last_up_change":"2026-03-09T18:13:09.422288+0000","last_in_change":"2026-03-09T18:12:52.299483+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"6dcc3e3a-5726-4fc0-b79f-03da6ded5591","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6803","nonce":3690867
692}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6805","nonce":3690867692}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6809","nonce":3690867692}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6807","nonce":3690867692}]},"public_addr":"192.168.123.103:6803/3690867692","cluster_addr":"192.168.123.103:6805/3690867692","heartbeat_back_addr":"192.168.123.103:6809/3690867692","heartbeat_front_addr":"192.168.123.103:6807/3690867692","state":["exists","up"]},{"osd":1,"uuid":"73ff702d-91cb-4376-b927-a763bfb3015c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6801","nonce":1836659993}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6803","nonce":1836659993}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6807","nonce":1836659993}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6805","nonce":1836659993}]},"public_addr":"192.168.123.109:6801/1836659993","cluster_addr":"192.168.123.109:6803/1836659993","heartbeat_back_addr":"192.168.123.109:6807/1836659993","heartbeat_front_addr":"192.168.123.109:6805/1836659993","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:12:34.367987+0000","dead_epoch":0},{"
osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:13:07.058685+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:6801/2195634331":"2026-03-10T18:11:15.115209+0000","192.168.123.103:6800/2195634331":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/695873394":"2026-03-10T18:11:15.115209+0000","192.168.123.103:6801/288264179":"2026-03-10T18:11:04.504358+0000","192.168.123.103:0/1126038190":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/1685817008":"2026-03-10T18:11:04.504358+0000","192.168.123.103:0/1769207661":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/1623513491":"2026-03-10T18:11:04.504358+0000","192.168.123.103:6800/288264179":"2026-03-10T18:11:04.504358+0000","192.168.123.103:0/3379036283":"2026-03-10T18:11:04.504358+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:13:18.545 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:18 vm03 bash[20762]: cluster 2026-03-09T18:13:17.136191+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:18.545 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:18 vm03 bash[20762]: cluster 2026-03-09T18:13:17.136191+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:18.597 INFO:tasks.cephadm.ceph_manager.ceph:[] 2026-03-09T18:13:18.597 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-09T18:13:18.597 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T18:13:18.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:18 vm09 bash[22981]: cluster 2026-03-09T18:13:17.136191+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:18.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:18 vm09 bash[22981]: cluster 2026-03-09T18:13:17.136191+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:19.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:19 vm03 bash[20762]: audit 2026-03-09T18:13:18.533273+0000 mon.a (mon.0) 309 : audit [DBG] from='client.? 192.168.123.103:0/1381330217' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:19.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:19 vm03 bash[20762]: audit 2026-03-09T18:13:18.533273+0000 mon.a (mon.0) 309 : audit [DBG] from='client.? 192.168.123.103:0/1381330217' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:19.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:19 vm09 bash[22981]: audit 2026-03-09T18:13:18.533273+0000 mon.a (mon.0) 309 : audit [DBG] from='client.? 192.168.123.103:0/1381330217' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:19.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:19 vm09 bash[22981]: audit 2026-03-09T18:13:18.533273+0000 mon.a (mon.0) 309 : audit [DBG] from='client.? 
192.168.123.103:0/1381330217' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:20.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:20 vm03 bash[20762]: cluster 2026-03-09T18:13:19.136447+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:20.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:20 vm03 bash[20762]: cluster 2026-03-09T18:13:19.136447+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:20 vm09 bash[22981]: cluster 2026-03-09T18:13:19.136447+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:20.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:20 vm09 bash[22981]: cluster 2026-03-09T18:13:19.136447+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:22.296 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:22.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:22 vm03 bash[20762]: cluster 2026-03-09T18:13:21.136863+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:22.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:22 vm03 bash[20762]: cluster 2026-03-09T18:13:21.136863+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:22.606 INFO:teuthology.orchestra.run.vm03.stdout:[client.0] 2026-03-09T18:13:22.606 INFO:teuthology.orchestra.run.vm03.stdout: key = AQDCDa9pcmjrIxAAJSkgVd+YYinnl2Z2g7NcCQ== 2026-03-09T18:13:22.663 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T18:13:22.663 
DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-09T18:13:22.663 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-09T18:13:22.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:22 vm09 bash[22981]: cluster 2026-03-09T18:13:21.136863+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:22.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:22 vm09 bash[22981]: cluster 2026-03-09T18:13:21.136863+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:22.676 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-09T18:13:23.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:23 vm03 bash[20762]: audit 2026-03-09T18:13:22.602514+0000 mon.a (mon.0) 310 : audit [INF] from='client.? 192.168.123.103:0/4195511088' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:23.572 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:23 vm03 bash[20762]: audit 2026-03-09T18:13:22.602514+0000 mon.a (mon.0) 310 : audit [INF] from='client.? 
192.168.123.103:0/4195511088' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:23.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:23 vm03 bash[20762]: audit 2026-03-09T18:13:22.605062+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.103:0/4195511088' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:13:23.573 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:23 vm03 bash[20762]: audit 2026-03-09T18:13:22.605062+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.103:0/4195511088' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:13:23.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:23 vm09 bash[22981]: audit 2026-03-09T18:13:22.602514+0000 mon.a (mon.0) 310 : audit [INF] from='client.? 192.168.123.103:0/4195511088' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:23.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:23 vm09 bash[22981]: audit 2026-03-09T18:13:22.602514+0000 mon.a (mon.0) 310 : audit [INF] from='client.? 192.168.123.103:0/4195511088' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:23.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:23 vm09 bash[22981]: audit 2026-03-09T18:13:22.605062+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 
192.168.123.103:0/4195511088' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:13:23.664 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:23 vm09 bash[22981]: audit 2026-03-09T18:13:22.605062+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.103:0/4195511088' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:13:24.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:24 vm09 bash[22981]: cluster 2026-03-09T18:13:23.137151+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:24.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:24 vm09 bash[22981]: cluster 2026-03-09T18:13:23.137151+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:25.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:24 vm03 bash[20762]: cluster 2026-03-09T18:13:23.137151+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:25.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:24 vm03 bash[20762]: cluster 2026-03-09T18:13:23.137151+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:26.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:26 vm09 bash[22981]: cluster 2026-03-09T18:13:25.137451+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:26.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:26 vm09 bash[22981]: cluster 2026-03-09T18:13:25.137451+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B 
data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:27.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:26 vm03 bash[20762]: cluster 2026-03-09T18:13:25.137451+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:27.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:26 vm03 bash[20762]: cluster 2026-03-09T18:13:25.137451+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:27.300 INFO:teuthology.orchestra.run.vm09.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.b/config 2026-03-09T18:13:27.631 INFO:teuthology.orchestra.run.vm09.stdout:[client.1] 2026-03-09T18:13:27.631 INFO:teuthology.orchestra.run.vm09.stdout: key = AQDHDa9ppZxzJRAARBFq6muVUN/e4YcFfhhIJA== 2026-03-09T18:13:27.689 DEBUG:teuthology.orchestra.run.vm09:> set -ex 2026-03-09T18:13:27.689 DEBUG:teuthology.orchestra.run.vm09:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-09T18:13:27.689 DEBUG:teuthology.orchestra.run.vm09:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-09T18:13:27.701 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-09T18:13:27.701 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T18:13:27.701 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph mgr dump --format=json 2026-03-09T18:13:28.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:28 vm09 bash[22981]: cluster 2026-03-09T18:13:27.137737+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:28.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:28 vm09 bash[22981]: cluster 2026-03-09T18:13:27.137737+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:28.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:28 vm09 bash[22981]: audit 2026-03-09T18:13:27.628240+0000 mon.a (mon.0) 312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:28.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:28 vm09 bash[22981]: audit 2026-03-09T18:13:27.628240+0000 mon.a (mon.0) 312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:28.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:28 vm09 bash[22981]: audit 2026-03-09T18:13:27.628354+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 
192.168.123.109:0/1576909741' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:28.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:28 vm09 bash[22981]: audit 2026-03-09T18:13:27.628354+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.109:0/1576909741' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:28.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:28 vm09 bash[22981]: audit 2026-03-09T18:13:27.630084+0000 mon.a (mon.0) 313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:13:28.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:28 vm09 bash[22981]: audit 2026-03-09T18:13:27.630084+0000 mon.a (mon.0) 313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:13:29.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:28 vm03 bash[20762]: cluster 2026-03-09T18:13:27.137737+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:29.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:28 vm03 bash[20762]: cluster 2026-03-09T18:13:27.137737+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:29.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:28 vm03 bash[20762]: audit 2026-03-09T18:13:27.628240+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:29.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:28 vm03 bash[20762]: audit 2026-03-09T18:13:27.628240+0000 mon.a (mon.0) 312 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:29.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:28 vm03 bash[20762]: audit 2026-03-09T18:13:27.628354+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.109:0/1576909741' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:29.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:28 vm03 bash[20762]: audit 2026-03-09T18:13:27.628354+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.109:0/1576909741' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-09T18:13:29.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:28 vm03 bash[20762]: audit 2026-03-09T18:13:27.630084+0000 mon.a (mon.0) 313 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:13:29.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:28 vm03 bash[20762]: audit 2026-03-09T18:13:27.630084+0000 mon.a (mon.0) 313 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-09T18:13:30.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:30 vm09 bash[22981]: cluster 2026-03-09T18:13:29.138054+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:30.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:30 vm09 bash[22981]: cluster 2026-03-09T18:13:29.138054+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:31.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:30 vm03 bash[20762]: cluster 2026-03-09T18:13:29.138054+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:31.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:30 vm03 bash[20762]: cluster 2026-03-09T18:13:29.138054+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:32.325 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:32.607 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:13:32.670 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":13,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":3967835665},{"type":"v1","addr":"192.168.123.103:6801","nonce":3967835665}]},"active_addr":"192.168.123.103:6801/3967835665","active_change":"2026-03-09T18:11:15.115314+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24107,"name":"b","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts 
to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.103:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":608653199}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.16
8.123.103:0","nonce":778591312}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":1210828933}]}]} 2026-03-09T18:13:32.672 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T18:13:32.672 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T18:13:32.672 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph osd dump --format=json 2026-03-09T18:13:32.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:32 vm09 bash[22981]: cluster 2026-03-09T18:13:31.138282+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:32.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:32 vm09 bash[22981]: cluster 2026-03-09T18:13:31.138282+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:32.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:32 vm09 bash[22981]: audit 2026-03-09T18:13:32.605691+0000 mon.a (mon.0) 314 : audit [DBG] from='client.? 192.168.123.103:0/3789258194' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:13:32.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:32 vm09 bash[22981]: audit 2026-03-09T18:13:32.605691+0000 mon.a (mon.0) 314 : audit [DBG] from='client.? 
192.168.123.103:0/3789258194' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:13:33.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:32 vm03 bash[20762]: cluster 2026-03-09T18:13:31.138282+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:33.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:32 vm03 bash[20762]: cluster 2026-03-09T18:13:31.138282+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:33.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:32 vm03 bash[20762]: audit 2026-03-09T18:13:32.605691+0000 mon.a (mon.0) 314 : audit [DBG] from='client.? 192.168.123.103:0/3789258194' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:13:33.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:32 vm03 bash[20762]: audit 2026-03-09T18:13:32.605691+0000 mon.a (mon.0) 314 : audit [DBG] from='client.? 
192.168.123.103:0/3789258194' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T18:13:35.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:34 vm03 bash[20762]: cluster 2026-03-09T18:13:33.138580+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:35.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:34 vm03 bash[20762]: cluster 2026-03-09T18:13:33.138580+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:35.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:34 vm09 bash[22981]: cluster 2026-03-09T18:13:33.138580+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:35.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:34 vm09 bash[22981]: cluster 2026-03-09T18:13:33.138580+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:36.334 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:36.577 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:13:36.577 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":14,"fsid":"24200844-1be3-11f1-b4ce-2b35a0bfc236","created":"2026-03-09T18:10:53.889909+0000","modified":"2026-03-09T18:13:10.965628+0000","last_up_change":"2026-03-09T18:13:09.422288+0000","last_in_change":"2026-03-09T18:12:52.299483+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"6dcc3e3a-5726-4fc0-b79f-03da6ded5591","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6803","nonce":3690867692}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6805","nonce":3690867692}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6809","nonce":3690867692}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6807","nonce":3690867692}]},"public_addr":"192.168.123.103:6803/3690867692","cluster_addr":"192.168.123.103:6805/3690867692","heartbeat_back_addr":"192.168.123.103:6809/3690867692","heartbeat_front_addr":"192.168.123.103:6807/3690867692","state":["exists","up"]},{"osd":1,"uuid":"73ff702d-91cb-4376-b927-a763bfb3015c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_t
hru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6801","nonce":1836659993}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6803","nonce":1836659993}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6807","nonce":1836659993}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6805","nonce":1836659993}]},"public_addr":"192.168.123.109:6801/1836659993","cluster_addr":"192.168.123.109:6803/1836659993","heartbeat_back_addr":"192.168.123.109:6807/1836659993","heartbeat_front_addr":"192.168.123.109:6805/1836659993","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:12:34.367987+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:13:07.058685+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:6801/2195634331":"2026-03-10T18:11:15.115209+0000","192.168.123.103:6800/2195634331":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/695873394":"2026-03-10T18:11:15.115209+0000","192.168.123.103:6801/288264179":"2026-03-10T18:11:04.504358+0000","192.168.123.103:0/1126038190":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/1685817008":"2026-03-10T18:11:04.504358+0000","192.168.123.103:0/1769207661":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/1623513491":"2026-03-10T18:11:04.504358+0000","192.168.123.103:6800/288264179":"2026-03-10T18:11:04.504358
+0000","192.168.123.103:0/3379036283":"2026-03-10T18:11:04.504358+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:13:36.632 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-09T18:13:36.632 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph osd dump --format=json 2026-03-09T18:13:37.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:36 vm03 bash[20762]: cluster 2026-03-09T18:13:35.138868+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:37.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:36 vm03 bash[20762]: cluster 2026-03-09T18:13:35.138868+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:37.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:36 vm03 bash[20762]: audit 2026-03-09T18:13:36.577729+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.103:0/4202008080' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:37.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:36 vm03 bash[20762]: audit 2026-03-09T18:13:36.577729+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 
192.168.123.103:0/4202008080' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:37.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:36 vm09 bash[22981]: cluster 2026-03-09T18:13:35.138868+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:37.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:36 vm09 bash[22981]: cluster 2026-03-09T18:13:35.138868+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:37.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:36 vm09 bash[22981]: audit 2026-03-09T18:13:36.577729+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.103:0/4202008080' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:37.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:36 vm09 bash[22981]: audit 2026-03-09T18:13:36.577729+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 
192.168.123.103:0/4202008080' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:39.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:38 vm03 bash[20762]: cluster 2026-03-09T18:13:37.139220+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:39.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:38 vm03 bash[20762]: cluster 2026-03-09T18:13:37.139220+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:39.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:38 vm09 bash[22981]: cluster 2026-03-09T18:13:37.139220+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:39.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:38 vm09 bash[22981]: cluster 2026-03-09T18:13:37.139220+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:40.349 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:40.618 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:13:40.618 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":14,"fsid":"24200844-1be3-11f1-b4ce-2b35a0bfc236","created":"2026-03-09T18:10:53.889909+0000","modified":"2026-03-09T18:13:10.965628+0000","last_up_change":"2026-03-09T18:13:09.422288+0000","last_in_change":"2026-03-09T18:12:52.299483+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"6dcc3e3a-5726-4fc0-b79f-03da6ded5591","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6803","nonce":3690867692}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6805","nonce":3690867692}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6809","nonce":3690867692}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":3690867692},{"type":"v1","addr":"192.168.123.103:6807","nonce":3690867692}]},"public_addr":"192.168.123.103:6803/3690867692","cluster_addr":"192.168.123.103:6805/3690867692","heartbeat_back_addr":"192.168.123.103:6809/3690867692","heartbeat_front_addr":"192.168.123.103:6807/3690867692","state":["exists","up"]},{"osd":1,"uuid":"73ff702d-91cb-4376-b927-a763bfb3015c","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_t
hru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6800","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6801","nonce":1836659993}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6802","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6803","nonce":1836659993}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6806","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6807","nonce":1836659993}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.109:6804","nonce":1836659993},{"type":"v1","addr":"192.168.123.109:6805","nonce":1836659993}]},"public_addr":"192.168.123.109:6801/1836659993","cluster_addr":"192.168.123.109:6803/1836659993","heartbeat_back_addr":"192.168.123.109:6807/1836659993","heartbeat_front_addr":"192.168.123.109:6805/1836659993","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:12:34.367987+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T18:13:07.058685+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:6801/2195634331":"2026-03-10T18:11:15.115209+0000","192.168.123.103:6800/2195634331":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/695873394":"2026-03-10T18:11:15.115209+0000","192.168.123.103:6801/288264179":"2026-03-10T18:11:04.504358+0000","192.168.123.103:0/1126038190":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/1685817008":"2026-03-10T18:11:04.504358+0000","192.168.123.103:0/1769207661":"2026-03-10T18:11:15.115209+0000","192.168.123.103:0/1623513491":"2026-03-10T18:11:04.504358+0000","192.168.123.103:6800/288264179":"2026-03-10T18:11:04.504358
+0000","192.168.123.103:0/3379036283":"2026-03-10T18:11:04.504358+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T18:13:40.677 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph tell osd.0 flush_pg_stats 2026-03-09T18:13:40.677 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph tell osd.1 flush_pg_stats 2026-03-09T18:13:41.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:40 vm03 bash[20762]: cluster 2026-03-09T18:13:39.139463+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:41.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:40 vm03 bash[20762]: cluster 2026-03-09T18:13:39.139463+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:40 vm03 bash[20762]: audit 2026-03-09T18:13:40.618472+0000 mon.a (mon.0) 316 : audit [DBG] from='client.? 192.168.123.103:0/131409217' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:41.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:40 vm03 bash[20762]: audit 2026-03-09T18:13:40.618472+0000 mon.a (mon.0) 316 : audit [DBG] from='client.? 
192.168.123.103:0/131409217' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:41.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:40 vm09 bash[22981]: cluster 2026-03-09T18:13:39.139463+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:41.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:40 vm09 bash[22981]: cluster 2026-03-09T18:13:39.139463+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:41.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:40 vm09 bash[22981]: audit 2026-03-09T18:13:40.618472+0000 mon.a (mon.0) 316 : audit [DBG] from='client.? 192.168.123.103:0/131409217' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:41.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:40 vm09 bash[22981]: audit 2026-03-09T18:13:40.618472+0000 mon.a (mon.0) 316 : audit [DBG] from='client.? 
192.168.123.103:0/131409217' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T18:13:43.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:42 vm03 bash[20762]: cluster 2026-03-09T18:13:41.139702+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:43.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:42 vm03 bash[20762]: cluster 2026-03-09T18:13:41.139702+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:43.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:42 vm09 bash[22981]: cluster 2026-03-09T18:13:41.139702+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:43.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:42 vm09 bash[22981]: cluster 2026-03-09T18:13:41.139702+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:44.361 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:44.363 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:44.682 INFO:teuthology.orchestra.run.vm03.stdout:34359738383 2026-03-09T18:13:44.682 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph osd last-stat-seq osd.0 2026-03-09T18:13:44.715 INFO:teuthology.orchestra.run.vm03.stdout:55834574859 2026-03-09T18:13:44.715 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 
-- ceph osd last-stat-seq osd.1 2026-03-09T18:13:45.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:44 vm03 bash[20762]: cluster 2026-03-09T18:13:43.140029+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:45.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:44 vm03 bash[20762]: cluster 2026-03-09T18:13:43.140029+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:45.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:44 vm09 bash[22981]: cluster 2026-03-09T18:13:43.140029+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:45.165 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:44 vm09 bash[22981]: cluster 2026-03-09T18:13:43.140029+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:47.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:46 vm03 bash[20762]: cluster 2026-03-09T18:13:45.140364+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:47.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:46 vm03 bash[20762]: cluster 2026-03-09T18:13:45.140364+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:47.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:46 vm09 bash[22981]: cluster 2026-03-09T18:13:45.140364+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:47.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:46 vm09 bash[22981]: cluster 2026-03-09T18:13:45.140364+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:48.372 
INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:48.625 INFO:teuthology.orchestra.run.vm03.stdout:34359738383 2026-03-09T18:13:48.678 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738383 got 34359738383 for osd.0 2026-03-09T18:13:48.678 DEBUG:teuthology.parallel:result is None 2026-03-09T18:13:49.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:48 vm03 bash[20762]: cluster 2026-03-09T18:13:47.140601+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:49.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:48 vm03 bash[20762]: cluster 2026-03-09T18:13:47.140601+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:49.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:48 vm03 bash[20762]: audit 2026-03-09T18:13:48.625607+0000 mon.a (mon.0) 317 : audit [DBG] from='client.? 192.168.123.103:0/2517952316' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:13:49.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:48 vm03 bash[20762]: audit 2026-03-09T18:13:48.625607+0000 mon.a (mon.0) 317 : audit [DBG] from='client.? 
192.168.123.103:0/2517952316' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:13:49.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:48 vm09 bash[22981]: cluster 2026-03-09T18:13:47.140601+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:49.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:48 vm09 bash[22981]: cluster 2026-03-09T18:13:47.140601+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:49.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:48 vm09 bash[22981]: audit 2026-03-09T18:13:48.625607+0000 mon.a (mon.0) 317 : audit [DBG] from='client.? 192.168.123.103:0/2517952316' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:13:49.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:48 vm09 bash[22981]: audit 2026-03-09T18:13:48.625607+0000 mon.a (mon.0) 317 : audit [DBG] from='client.? 
192.168.123.103:0/2517952316' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T18:13:49.375 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:49.623 INFO:teuthology.orchestra.run.vm03.stdout:55834574859 2026-03-09T18:13:49.679 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574859 got 55834574859 for osd.1 2026-03-09T18:13:49.679 DEBUG:teuthology.parallel:result is None 2026-03-09T18:13:49.679 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T18:13:49.679 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph pg dump --format=json 2026-03-09T18:13:50.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:49 vm03 bash[20762]: audit 2026-03-09T18:13:49.623068+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.103:0/3508240706' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:13:50.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:49 vm03 bash[20762]: audit 2026-03-09T18:13:49.623068+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.103:0/3508240706' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:13:50.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:49 vm09 bash[22981]: audit 2026-03-09T18:13:49.623068+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.103:0/3508240706' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:13:50.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:49 vm09 bash[22981]: audit 2026-03-09T18:13:49.623068+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 
192.168.123.103:0/3508240706' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T18:13:51.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:50 vm03 bash[20762]: cluster 2026-03-09T18:13:49.140820+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:51.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:50 vm03 bash[20762]: cluster 2026-03-09T18:13:49.140820+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:51.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:50 vm09 bash[22981]: cluster 2026-03-09T18:13:49.140820+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:51.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:50 vm09 bash[22981]: cluster 2026-03-09T18:13:49.140820+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:53.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:52 vm03 bash[20762]: cluster 2026-03-09T18:13:51.141120+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:53.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:52 vm03 bash[20762]: cluster 2026-03-09T18:13:51.141120+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:53.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:52 vm09 bash[22981]: cluster 2026-03-09T18:13:51.141120+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:53.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:52 vm09 bash[22981]: cluster 2026-03-09T18:13:51.141120+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B 
data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:53.387 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:53.640 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:13:53.640 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-09T18:13:53.690 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":83,"stamp":"2026-03-09T18:13:53.141256+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":2,"num_per_pool_osds":2,"num_per_pool_omap_osds":0,"kb":41934848,"kb_used":463528,"kb_used_data":240,"kb_used_omap":3,"kb_used_meta":53628,"kb_avail":41471320,"statfs":{"total":42941284352,"available":42466631680,"internally_reserv
ed":0,"allocated":245760,"data_stored":60148,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":3180,"internal_metadata":54915988},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":1,"up_from":13,"seq":55834574860,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436560,"kb_used_data":120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530864,"statfs":{"total":21470642176,"available":21023604736,"internally_res
erved":0,"allocated":122880,"data_stored":30074,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738385,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26968,"kb_used_data":120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940456,"statfs":{"total":21470642176,"available":21443026944,"internally_reserved":0,"allocated":122880,"data_stored":30074,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-09T18:13:53.690 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph pg dump --format=json 2026-03-09T18:13:55.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:54 vm03 bash[20762]: cluster 2026-03-09T18:13:53.141343+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:55.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:54 vm03 bash[20762]: cluster 2026-03-09T18:13:53.141343+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:55.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:54 vm03 bash[20762]: 
audit 2026-03-09T18:13:53.639861+0000 mgr.a (mgr.14150) 119 : audit [DBG] from='client.14270 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:13:55.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:54 vm03 bash[20762]: audit 2026-03-09T18:13:53.639861+0000 mgr.a (mgr.14150) 119 : audit [DBG] from='client.14270 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:13:55.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:54 vm09 bash[22981]: cluster 2026-03-09T18:13:53.141343+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:55.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:54 vm09 bash[22981]: cluster 2026-03-09T18:13:53.141343+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:55.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:54 vm09 bash[22981]: audit 2026-03-09T18:13:53.639861+0000 mgr.a (mgr.14150) 119 : audit [DBG] from='client.14270 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:13:55.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:54 vm09 bash[22981]: audit 2026-03-09T18:13:53.639861+0000 mgr.a (mgr.14150) 119 : audit [DBG] from='client.14270 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:13:57.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:56 vm03 bash[20762]: cluster 2026-03-09T18:13:55.141537+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:57.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:56 vm03 bash[20762]: cluster 2026-03-09T18:13:55.141537+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap 
v84: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:57.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:56 vm09 bash[22981]: cluster 2026-03-09T18:13:55.141537+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:57.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:56 vm09 bash[22981]: cluster 2026-03-09T18:13:55.141537+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:57.399 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:13:57.661 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:13:57.661 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-09T18:13:57.714 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":85,"stamp":"2026-03-09T18:13:57.141710+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,
"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":2,"num_per_pool_osds":2,"num_per_pool_omap_osds":0,"kb":41934848,"kb_used":463528,"kb_used_data":240,"kb_used_omap":3,"kb_used_meta":53628,"kb_avail":41471320,"statfs":{"total":42941284352,"available":42466631680,"internally_reserved":0,"allocated":245760,"data_stored":60148,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":3180,"internal_metadata":54915988},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"d
ata_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":1,"up_from":13,"seq":55834574861,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436560,"kb_used_data":120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530864,"statfs":{"total":21470642176,"available":21023604736,"internally_reserved":0,"allocated":122880,"data_stored":30074,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738385,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26968,"kb_used_data":120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940456,"statfs":{"total":21470642176,"available":21443026944,"internally_reserved":0,"allocated":122880,"data_stored":30074,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-09T18:13:57.715 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T18:13:57.715 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T18:13:57.715 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T18:13:57.715 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph health --format=json 2026-03-09T18:13:59.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:58 vm03 bash[20762]: cluster 2026-03-09T18:13:57.141800+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:59.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:58 vm03 bash[20762]: cluster 2026-03-09T18:13:57.141800+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:59.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:58 vm03 bash[20762]: audit 2026-03-09T18:13:57.660711+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:13:59.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:13:58 vm03 bash[20762]: audit 2026-03-09T18:13:57.660711+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:13:59.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:58 vm09 bash[22981]: cluster 2026-03-09T18:13:57.141800+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:59.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:58 vm09 bash[22981]: cluster 2026-03-09T18:13:57.141800+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:13:59.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:58 vm09 bash[22981]: audit 
2026-03-09T18:13:57.660711+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:13:59.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:13:58 vm09 bash[22981]: audit 2026-03-09T18:13:57.660711+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14274 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:14:01.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:00 vm03 bash[20762]: cluster 2026-03-09T18:13:59.142074+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:01.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:00 vm03 bash[20762]: cluster 2026-03-09T18:13:59.142074+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:01.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:00 vm09 bash[22981]: cluster 2026-03-09T18:13:59.142074+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:01.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:00 vm09 bash[22981]: cluster 2026-03-09T18:13:59.142074+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:01.414 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:14:01.697 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T18:14:01.697 INFO:teuthology.orchestra.run.vm03.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T18:14:01.762 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T18:14:01.762 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T18:14:01.762 INFO:teuthology.run_tasks:Running task 
cephadm.shell... 2026-03-09T18:14:01.765 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm03.local 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- bash -c 'set -ex 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> HOSTNAMES=$(ceph orch host ls --format json | jq -r '"'"'.[] | .hostname'"'"') 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> for host in $HOSTNAMES; do 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> # do a check-host on each host to make sure it'"'"'s reachable 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> ceph cephadm check-host ${host} 2> ${host}-ok.txt 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> HOST_OK=$(cat ${host}-ok.txt) 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> if ! grep -q "Host looks OK" <<< "$HOST_OK"; then 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> printf "Failed host check:\n\n$HOST_OK" 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> exit 1 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> fi 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> done 2026-03-09T18:14:01.765 DEBUG:teuthology.orchestra.run.vm03:> ' 2026-03-09T18:14:02.072 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:01 vm03 bash[20762]: audit 2026-03-09T18:14:01.697383+0000 mon.a (mon.0) 319 : audit [DBG] from='client.? 192.168.123.103:0/1789569915' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:14:02.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:01 vm03 bash[20762]: audit 2026-03-09T18:14:01.697383+0000 mon.a (mon.0) 319 : audit [DBG] from='client.? 
192.168.123.103:0/1789569915' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:14:02.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:01 vm09 bash[22981]: audit 2026-03-09T18:14:01.697383+0000 mon.a (mon.0) 319 : audit [DBG] from='client.? 192.168.123.103:0/1789569915' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:14:02.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:01 vm09 bash[22981]: audit 2026-03-09T18:14:01.697383+0000 mon.a (mon.0) 319 : audit [DBG] from='client.? 192.168.123.103:0/1789569915' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T18:14:03.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:02 vm09 bash[22981]: cluster 2026-03-09T18:14:01.142318+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:03.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:02 vm09 bash[22981]: cluster 2026-03-09T18:14:01.142318+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:03.322 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:02 vm03 bash[20762]: cluster 2026-03-09T18:14:01.142318+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:03.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:02 vm03 bash[20762]: cluster 2026-03-09T18:14:01.142318+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:05.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:04 vm09 bash[22981]: cluster 2026-03-09T18:14:03.142521+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:05.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:04 vm09 bash[22981]: cluster 
2026-03-09T18:14:03.142521+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:04 vm03 bash[20762]: cluster 2026-03-09T18:14:03.142521+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:05.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:04 vm03 bash[20762]: cluster 2026-03-09T18:14:03.142521+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:05.424 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:14:05.534 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r '.[] | .hostname' 2026-03-09T18:14:05.534 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch host ls --format json 2026-03-09T18:14:05.697 INFO:teuthology.orchestra.run.vm03.stderr:+ HOSTNAMES='vm03 2026-03-09T18:14:05.697 INFO:teuthology.orchestra.run.vm03.stderr:vm09' 2026-03-09T18:14:05.697 INFO:teuthology.orchestra.run.vm03.stderr:+ for host in $HOSTNAMES 2026-03-09T18:14:05.697 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph cephadm check-host vm03 2026-03-09T18:14:06.136 INFO:teuthology.orchestra.run.vm03.stdout:vm03 (None) ok 2026-03-09T18:14:06.146 INFO:teuthology.orchestra.run.vm03.stderr:++ cat vm03-ok.txt 2026-03-09T18:14:06.147 INFO:teuthology.orchestra.run.vm03.stderr:+ HOST_OK='docker (/usr/bin/docker) is present 2026-03-09T18:14:06.147 INFO:teuthology.orchestra.run.vm03.stderr:systemctl is present 2026-03-09T18:14:06.147 INFO:teuthology.orchestra.run.vm03.stderr:lvcreate is present 2026-03-09T18:14:06.147 INFO:teuthology.orchestra.run.vm03.stderr:Unit ntp.service is enabled and running 2026-03-09T18:14:06.147 INFO:teuthology.orchestra.run.vm03.stderr:Hostname "vm03" matches what is expected. 
2026-03-09T18:14:06.147 INFO:teuthology.orchestra.run.vm03.stderr:Host looks OK' 2026-03-09T18:14:06.147 INFO:teuthology.orchestra.run.vm03.stderr:+ grep -q 'Host looks OK' 2026-03-09T18:14:06.148 INFO:teuthology.orchestra.run.vm03.stderr:+ for host in $HOSTNAMES 2026-03-09T18:14:06.148 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph cephadm check-host vm09 2026-03-09T18:14:06.581 INFO:teuthology.orchestra.run.vm03.stdout:vm09 (None) ok 2026-03-09T18:14:06.591 INFO:teuthology.orchestra.run.vm03.stderr:++ cat vm09-ok.txt 2026-03-09T18:14:06.592 INFO:teuthology.orchestra.run.vm03.stderr:+ HOST_OK='docker (/usr/bin/docker) is present 2026-03-09T18:14:06.592 INFO:teuthology.orchestra.run.vm03.stderr:systemctl is present 2026-03-09T18:14:06.592 INFO:teuthology.orchestra.run.vm03.stderr:lvcreate is present 2026-03-09T18:14:06.592 INFO:teuthology.orchestra.run.vm03.stderr:Unit ntp.service is enabled and running 2026-03-09T18:14:06.592 INFO:teuthology.orchestra.run.vm03.stderr:Hostname "vm09" matches what is expected. 
2026-03-09T18:14:06.592 INFO:teuthology.orchestra.run.vm03.stderr:Host looks OK' 2026-03-09T18:14:06.592 INFO:teuthology.orchestra.run.vm03.stderr:+ grep -q 'Host looks OK' 2026-03-09T18:14:06.637 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T18:14:06.642 INFO:tasks.cephadm:Teardown begin 2026-03-09T18:14:06.642 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:14:06.649 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:14:06.658 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T18:14:06.659 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 -- ceph mgr module disable cephadm 2026-03-09T18:14:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:06 vm09 bash[22981]: cluster 2026-03-09T18:14:05.142731+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:06 vm09 bash[22981]: cluster 2026-03-09T18:14:05.142731+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:06 vm09 bash[22981]: audit 2026-03-09T18:14:05.685834+0000 mgr.a (mgr.14150) 127 : audit [DBG] from='client.14282 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:14:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:06 vm09 bash[22981]: audit 2026-03-09T18:14:05.685834+0000 mgr.a (mgr.14150) 127 : audit [DBG] from='client.14282 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": 
["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:14:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:06 vm09 bash[22981]: audit 2026-03-09T18:14:05.845657+0000 mgr.a (mgr.14150) 128 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:14:06.914 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:06 vm09 bash[22981]: audit 2026-03-09T18:14:05.845657+0000 mgr.a (mgr.14150) 128 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:14:07.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:06 vm03 bash[20762]: cluster 2026-03-09T18:14:05.142731+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:07.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:06 vm03 bash[20762]: cluster 2026-03-09T18:14:05.142731+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:07.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:06 vm03 bash[20762]: audit 2026-03-09T18:14:05.685834+0000 mgr.a (mgr.14150) 127 : audit [DBG] from='client.14282 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:14:07.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:06 vm03 bash[20762]: audit 2026-03-09T18:14:05.685834+0000 mgr.a (mgr.14150) 127 : audit [DBG] from='client.14282 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T18:14:07.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:06 vm03 bash[20762]: audit 2026-03-09T18:14:05.845657+0000 mgr.a (mgr.14150) 128 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "cephadm 
check-host", "host": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:14:07.073 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:06 vm03 bash[20762]: audit 2026-03-09T18:14:05.845657+0000 mgr.a (mgr.14150) 128 : audit [DBG] from='client.14286 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:14:08.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:07 vm09 bash[22981]: audit 2026-03-09T18:14:06.293976+0000 mgr.a (mgr.14150) 129 : audit [DBG] from='client.14290 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:14:08.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:07 vm09 bash[22981]: audit 2026-03-09T18:14:06.293976+0000 mgr.a (mgr.14150) 129 : audit [DBG] from='client.14290 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:14:08.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:07 vm03 bash[20762]: audit 2026-03-09T18:14:06.293976+0000 mgr.a (mgr.14150) 129 : audit [DBG] from='client.14290 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:14:08.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:07 vm03 bash[20762]: audit 2026-03-09T18:14:06.293976+0000 mgr.a (mgr.14150) 129 : audit [DBG] from='client.14290 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm09", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T18:14:09.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:08 vm09 bash[22981]: cluster 2026-03-09T18:14:07.142987+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:09.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:08 vm09 bash[22981]: cluster 2026-03-09T18:14:07.142987+0000 mgr.a (mgr.14150) 
130 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:09.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:08 vm03 bash[20762]: cluster 2026-03-09T18:14:07.142987+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:09.323 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:08 vm03 bash[20762]: cluster 2026-03-09T18:14:07.142987+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:11.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:10 vm09 bash[22981]: cluster 2026-03-09T18:14:09.143308+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:11.164 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:10 vm09 bash[22981]: cluster 2026-03-09T18:14:09.143308+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:11.305 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/mon.a/config 2026-03-09T18:14:11.319 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:10 vm03 bash[20762]: cluster 2026-03-09T18:14:09.143308+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:11.319 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:10 vm03 bash[20762]: cluster 2026-03-09T18:14:09.143308+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-09T18:14:11.494 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T18:14:11.492+0000 7f9502fe2640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T18:14:11.494 
INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T18:14:11.492+0000 7f9502fe2640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T18:14:11.495 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T18:14:11.492+0000 7f9502fe2640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T18:14:11.495 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T18:14:11.492+0000 7f9502fe2640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T18:14:11.495 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T18:14:11.492+0000 7f9502fe2640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T18:14:11.495 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T18:14:11.492+0000 7f9502fe2640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T18:14:11.495 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T18:14:11.492+0000 7f9502fe2640 -1 monclient: keyring not found 2026-03-09T18:14:11.495 INFO:teuthology.orchestra.run.vm03.stderr:[errno 21] error connecting to the cluster 2026-03-09T18:14:11.544 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:14:11.544 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-09T18:14:11.544 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T18:14:11.547 DEBUG:teuthology.orchestra.run.vm09:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T18:14:11.550 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T18:14:11.551 INFO:tasks.cephadm.mon.a:Stopping mon.a... 
2026-03-09T18:14:11.551 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.a 2026-03-09T18:14:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:11 vm03 systemd[1]: Stopping Ceph mon.a for 24200844-1be3-11f1-b4ce-2b35a0bfc236... 2026-03-09T18:14:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:11 vm03 bash[20762]: debug 2026-03-09T18:14:11.640+0000 7f91c4d2c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T18:14:11.823 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 18:14:11 vm03 bash[20762]: debug 2026-03-09T18:14:11.640+0000 7f91c4d2c640 -1 mon.a@0(leader) e2 *** Got Signal Terminated *** 2026-03-09T18:14:11.954 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.a.service' 2026-03-09T18:14:11.969 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:14:11.969 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T18:14:11.969 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-09T18:14:11.969 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.b 2026-03-09T18:14:12.244 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:11 vm09 systemd[1]: Stopping Ceph mon.b for 24200844-1be3-11f1-b4ce-2b35a0bfc236... 
2026-03-09T18:14:12.245 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:12 vm09 bash[22981]: debug 2026-03-09T18:14:12.033+0000 7fd8d19a2640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T18:14:12.245 INFO:journalctl@ceph.mon.b.vm09.stdout:Mar 09 18:14:12 vm09 bash[22981]: debug 2026-03-09T18:14:12.033+0000 7fd8d19a2640 -1 mon.b@1(peon) e2 *** Got Signal Terminated *** 2026-03-09T18:14:12.299 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mon.b.service' 2026-03-09T18:14:12.310 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:14:12.310 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-09T18:14:12.310 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-09T18:14:12.310 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.a 2026-03-09T18:14:12.474 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.a.service' 2026-03-09T18:14:12.484 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:14:12.484 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-09T18:14:12.484 INFO:tasks.cephadm.mgr.b:Stopping mgr.b... 2026-03-09T18:14:12.484 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.b 2026-03-09T18:14:12.496 INFO:journalctl@ceph.mgr.b.vm09.stdout:Mar 09 18:14:12 vm09 systemd[1]: Stopping Ceph mgr.b for 24200844-1be3-11f1-b4ce-2b35a0bfc236... 
2026-03-09T18:14:12.632 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@mgr.b.service' 2026-03-09T18:14:12.685 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:14:12.685 INFO:tasks.cephadm.mgr.b:Stopped mgr.b 2026-03-09T18:14:12.685 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T18:14:12.685 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@osd.0 2026-03-09T18:14:13.073 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 18:14:12 vm03 systemd[1]: Stopping Ceph osd.0 for 24200844-1be3-11f1-b4ce-2b35a0bfc236... 2026-03-09T18:14:13.073 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 18:14:12 vm03 bash[30402]: debug 2026-03-09T18:14:12.732+0000 7f70fbc51640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:14:13.073 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 18:14:12 vm03 bash[30402]: debug 2026-03-09T18:14:12.732+0000 7f70fbc51640 -1 osd.0 14 *** Got signal Terminated *** 2026-03-09T18:14:13.073 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 18:14:12 vm03 bash[30402]: debug 2026-03-09T18:14:12.732+0000 7f70fbc51640 -1 osd.0 14 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:14:18.072 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 18:14:17 vm03 bash[36205]: ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236-osd-0 2026-03-09T18:14:18.127 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@osd.0.service' 2026-03-09T18:14:18.155 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:14:18.155 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T18:14:18.155 INFO:tasks.cephadm.osd.1:Stopping osd.1... 
2026-03-09T18:14:18.155 DEBUG:teuthology.orchestra.run.vm09:> sudo systemctl stop ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@osd.1 2026-03-09T18:14:18.414 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 09 18:14:18 vm09 systemd[1]: Stopping Ceph osd.1 for 24200844-1be3-11f1-b4ce-2b35a0bfc236... 2026-03-09T18:14:18.415 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 09 18:14:18 vm09 bash[26200]: debug 2026-03-09T18:14:18.205+0000 7f8c7393e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T18:14:18.415 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 09 18:14:18 vm09 bash[26200]: debug 2026-03-09T18:14:18.205+0000 7f8c7393e640 -1 osd.1 14 *** Got signal Terminated *** 2026-03-09T18:14:18.415 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 09 18:14:18 vm09 bash[26200]: debug 2026-03-09T18:14:18.205+0000 7f8c7393e640 -1 osd.1 14 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T18:14:23.551 INFO:journalctl@ceph.osd.1.vm09.stdout:Mar 09 18:14:23 vm09 bash[30274]: ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236-osd-1 2026-03-09T18:14:23.589 DEBUG:teuthology.orchestra.run.vm09:> sudo pkill -f 'journalctl -f -n 0 -u ceph-24200844-1be3-11f1-b4ce-2b35a0bfc236@osd.1.service' 2026-03-09T18:14:23.614 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T18:14:23.614 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T18:14:23.614 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 --force --keep-logs 2026-03-09T18:14:23.711 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:14:25.788 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 --force --keep-logs 
2026-03-09T18:14:25.892 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:14:27.964 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:14:27.973 INFO:teuthology.orchestra.run.vm03.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-09T18:14:27.973 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T18:14:27.974 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T18:14:27.983 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T18:14:27.983 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597/remote/vm03/crash 2026-03-09T18:14:27.983 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/crash -- . 2026-03-09T18:14:28.022 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/crash: Cannot open: No such file or directory 2026-03-09T18:14:28.022 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now 2026-03-09T18:14:28.022 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597/remote/vm09/crash 2026-03-09T18:14:28.022 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/crash -- . 
2026-03-09T18:14:28.032 INFO:teuthology.orchestra.run.vm09.stderr:tar: /var/lib/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/crash: Cannot open: No such file or directory 2026-03-09T18:14:28.032 INFO:teuthology.orchestra.run.vm09.stderr:tar: Error is not recoverable: exiting now 2026-03-09T18:14:28.033 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-09T18:14:28.033 DEBUG:teuthology.orchestra.run.vm03:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | head -n 1 2026-03-09T18:14:28.076 INFO:tasks.cephadm:Compressing logs... 2026-03-09T18:14:28.076 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:14:28.119 DEBUG:teuthology.orchestra.run.vm09:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T18:14:28.126 INFO:teuthology.orchestra.run.vm09.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T18:14:28.127 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T18:14:28.127 INFO:teuthology.orchestra.run.vm03.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T18:14:28.127 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T18:14:28.127 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.log 2026-03-09T18:14:28.127 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mgr.a.log 2026-03-09T18:14:28.128 
INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mon.b.log 2026-03-09T18:14:28.128 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.log 2026-03-09T18:14:28.128 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.log: 84.9% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.log.gz 2026-03-09T18:14:28.128 INFO:teuthology.orchestra.run.vm09.stderr: 88.4% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T18:14:28.128 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-osd.1.log 2026-03-09T18:14:28.129 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mgr.b.log 2026-03-09T18:14:28.129 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mgr.a.log: 89.4% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T18:14:28.130 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mon.a.log 2026-03-09T18:14:28.130 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.log: 85.2% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.log.gz 2026-03-09T18:14:28.131 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.audit.log 2026-03-09T18:14:28.134 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mon.b.log: /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.audit.log 2026-03-09T18:14:28.138 
INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-volume.log 2026-03-09T18:14:28.139 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mgr.b.log: 90.4% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mgr.b.log.gz 2026-03-09T18:14:28.139 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.audit.log: 89.2% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.audit.log.gz 2026-03-09T18:14:28.139 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-volume.log 2026-03-09T18:14:28.140 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.audit.log: 94.2% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-osd.1.log.gz 2026-03-09T18:14:28.140 INFO:teuthology.orchestra.run.vm09.stderr: 89.3% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.audit.log.gz 2026-03-09T18:14:28.140 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.cephadm.log 2026-03-09T18:14:28.142 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.cephadm.log 2026-03-09T18:14:28.143 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-volume.log: 95.8% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-volume.log.gz 2026-03-09T18:14:28.144 INFO:teuthology.orchestra.run.vm09.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.cephadm.log: 74.5% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.cephadm.log.gz 2026-03-09T18:14:28.147 
INFO:teuthology.orchestra.run.vm09.stderr: 92.7% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mon.b.log.gz 2026-03-09T18:14:28.148 INFO:teuthology.orchestra.run.vm09.stderr: 2026-03-09T18:14:28.148 INFO:teuthology.orchestra.run.vm09.stderr:real 0m0.026s 2026-03-09T18:14:28.148 INFO:teuthology.orchestra.run.vm09.stderr:user 0m0.045s 2026-03-09T18:14:28.148 INFO:teuthology.orchestra.run.vm09.stderr:sys 0m0.000s 2026-03-09T18:14:28.150 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-volume.log: 95.9% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-volume.log.gz 2026-03-09T18:14:28.151 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-osd.0.log 2026-03-09T18:14:28.151 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.cephadm.log: 78.6% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph.cephadm.log.gz 2026-03-09T18:14:28.157 INFO:teuthology.orchestra.run.vm03.stderr: 89.7% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mgr.a.log.gz 2026-03-09T18:14:28.166 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-osd.0.log: 94.1% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-osd.0.log.gz 2026-03-09T18:14:28.200 INFO:teuthology.orchestra.run.vm03.stderr: 91.2% -- replaced with /var/log/ceph/24200844-1be3-11f1-b4ce-2b35a0bfc236/ceph-mon.a.log.gz 2026-03-09T18:14:28.202 INFO:teuthology.orchestra.run.vm03.stderr: 2026-03-09T18:14:28.202 INFO:teuthology.orchestra.run.vm03.stderr:real 0m0.081s 2026-03-09T18:14:28.202 INFO:teuthology.orchestra.run.vm03.stderr:user 0m0.109s 2026-03-09T18:14:28.202 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.011s 2026-03-09T18:14:28.202 INFO:tasks.cephadm:Archiving logs... 
2026-03-09T18:14:28.202 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597/remote/vm03/log 2026-03-09T18:14:28.202 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T18:14:28.264 DEBUG:teuthology.misc:Transferring archived files from vm09:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597/remote/vm09/log 2026-03-09T18:14:28.264 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T18:14:28.275 INFO:tasks.cephadm:Removing cluster... 2026-03-09T18:14:28.275 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 --force 2026-03-09T18:14:28.402 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:14:29.651 DEBUG:teuthology.orchestra.run.vm09:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 24200844-1be3-11f1-b4ce-2b35a0bfc236 --force 2026-03-09T18:14:29.746 INFO:teuthology.orchestra.run.vm09.stdout:Deleting cluster with fsid: 24200844-1be3-11f1-b4ce-2b35a0bfc236 2026-03-09T18:14:30.966 INFO:tasks.cephadm:Removing cephadm ... 2026-03-09T18:14:30.966 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T18:14:30.970 DEBUG:teuthology.orchestra.run.vm09:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T18:14:30.974 INFO:tasks.cephadm:Teardown complete 2026-03-09T18:14:30.974 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T18:14:30.976 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 
2026-03-09T18:14:30.976 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T18:14:31.015 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T18:14:31.045 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-09T18:14:31.046 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T18:14:31.054 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-09T18:14:31.094 DEBUG:teuthology.orchestra.run.vm09:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T18:14:31.130 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:14:31.131 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T18:14:31.351 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:31.351 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:31.360 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:31.361 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:31.599 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:31.599 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T18:14:31.600 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T18:14:31.600 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:31.619 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T18:14:31.621 INFO:teuthology.orchestra.run.vm03.stdout:  ceph*
2026-03-09T18:14:31.631 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:31.631 INFO:teuthology.orchestra.run.vm09.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T18:14:31.631 INFO:teuthology.orchestra.run.vm09.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-09T18:14:31.631 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:31.639 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T18:14:31.639 INFO:teuthology.orchestra.run.vm09.stdout:  ceph*
2026-03-09T18:14:31.820 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T18:14:31.820 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-09T18:14:31.832 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T18:14:31.832 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-09T18:14:31.866 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118605 files and directories currently installed.)
2026-03-09T18:14:31.867 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 118605 files and directories currently installed.)
2026-03-09T18:14:31.869 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:31.869 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:32.976 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:33.014 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:33.236 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:33.236 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:33.242 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:33.280 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:33.472 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:33.473 INFO:teuthology.orchestra.run.vm09.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T18:14:33.474 INFO:teuthology.orchestra.run.vm09.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T18:14:33.474 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:33.493 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T18:14:33.494 INFO:teuthology.orchestra.run.vm09.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-09T18:14:33.509 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:33.510 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:33.688 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T18:14:33.688 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-09T18:14:33.736 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-09T18:14:33.739 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:33.760 INFO:teuthology.orchestra.run.vm09.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:33.792 INFO:teuthology.orchestra.run.vm09.stdout:Looking for files to backup/remove ...
2026-03-09T18:14:33.794 INFO:teuthology.orchestra.run.vm09.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-09T18:14:33.797 INFO:teuthology.orchestra.run.vm09.stdout:Removing user `cephadm' ...
2026-03-09T18:14:33.797 INFO:teuthology.orchestra.run.vm09.stdout:Warning: group `nogroup' has no more members.
2026-03-09T18:14:33.799 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:33.800 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T18:14:33.801 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T18:14:33.801 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:33.807 INFO:teuthology.orchestra.run.vm09.stdout:Done.
2026-03-09T18:14:33.820 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T18:14:33.821 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-cephadm* cephadm*
2026-03-09T18:14:33.832 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:14:33.955 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T18:14:33.957 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:34.033 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-09T18:14:34.033 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-09T18:14:34.079 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-09T18:14:34.082 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:34.104 INFO:teuthology.orchestra.run.vm03.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:34.135 INFO:teuthology.orchestra.run.vm03.stdout:Looking for files to backup/remove ...
2026-03-09T18:14:34.137 INFO:teuthology.orchestra.run.vm03.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-09T18:14:34.139 INFO:teuthology.orchestra.run.vm03.stdout:Removing user `cephadm' ...
2026-03-09T18:14:34.139 INFO:teuthology.orchestra.run.vm03.stdout:Warning: group `nogroup' has no more members.
2026-03-09T18:14:34.150 INFO:teuthology.orchestra.run.vm03.stdout:Done.
2026-03-09T18:14:34.173 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:14:34.282 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T18:14:34.285 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:35.132 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:35.174 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:35.420 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:35.421 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:35.431 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:35.472 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:35.740 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:35.740 INFO:teuthology.orchestra.run.vm09.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T18:14:35.740 INFO:teuthology.orchestra.run.vm09.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T18:14:35.740 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:35.745 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:35.746 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:35.747 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T18:14:35.748 INFO:teuthology.orchestra.run.vm09.stdout:  ceph-mds*
2026-03-09T18:14:35.956 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T18:14:35.956 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-09T18:14:35.991 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T18:14:35.993 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:36.026 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:36.026 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-09T18:14:36.027 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-09T18:14:36.027 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:36.044 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T18:14:36.045 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mds*
2026-03-09T18:14:36.253 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T18:14:36.254 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-09T18:14:36.302 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-09T18:14:36.305 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:36.445 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:14:36.559 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T18:14:36.563 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:36.763 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:14:36.885 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T18:14:36.889 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:38.326 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:38.364 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:38.369 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:38.407 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:38.526 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:38.527 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:38.600 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:38.601 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:  sg3-utils-udev
2026-03-09T18:14:38.700 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:38.707 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T18:14:38.707 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-09T18:14:38.707 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-k8sevents*
2026-03-09T18:14:38.872 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-09T18:14:38.872 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 165 MB disk space will be freed.
2026-03-09T18:14:38.872 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:38.873 INFO:teuthology.orchestra.run.vm09.stdout:  ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:  sg3-utils-udev
2026-03-09T18:14:38.874 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:38.892 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T18:14:38.892 INFO:teuthology.orchestra.run.vm09.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-09T18:14:38.894 INFO:teuthology.orchestra.run.vm09.stdout:  ceph-mgr-k8sevents*
2026-03-09T18:14:38.923 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T18:14:38.926 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:38.936 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:38.963 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:39.003 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:39.106 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-09T18:14:39.106 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 165 MB disk space will be freed.
2026-03-09T18:14:39.160 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-09T18:14:39.164 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:39.179 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:39.208 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:39.259 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:39.538 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T18:14:39.540 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:39.790 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T18:14:39.795 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:40.903 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:40.943 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:41.204 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:41.204 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:41.370 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:41.371 INFO:teuthology.orchestra.run.vm03.stdout:  python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:41.371 INFO:teuthology.orchestra.run.vm03.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:41.371 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:41.371 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:41.371 INFO:teuthology.orchestra.run.vm03.stdout:  python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:41.371 INFO:teuthology.orchestra.run.vm03.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:41.371 INFO:teuthology.orchestra.run.vm03.stdout:  sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:41.371 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:41.378 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T18:14:41.378 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-09T18:14:41.505 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:41.536 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T18:14:41.536 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 472 MB disk space will be freed.
2026-03-09T18:14:41.541 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:41.570 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T18:14:41.572 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:41.635 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:41.753 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:41.753 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:41.870 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:41.871 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:41.871 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:41.872 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:41.885 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T18:14:41.887 INFO:teuthology.orchestra.run.vm09.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-09T18:14:42.032 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:42.079 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-09T18:14:42.079 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 472 MB disk space will be freed.
2026-03-09T18:14:42.164 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.)
2026-03-09T18:14:42.165 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:42.228 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:42.473 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:42.664 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:42.911 INFO:teuthology.orchestra.run.vm03.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:43.080 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:43.326 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:43.366 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:43.513 INFO:teuthology.orchestra.run.vm09.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:43.744 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:14:43.777 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T18:14:43.841 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117455 files and directories currently installed.)
2026-03-09T18:14:43.843 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:43.915 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:43.955 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:44.409 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:14:44.449 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T18:14:44.467 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:44.528 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117455 files and directories currently installed.)
2026-03-09T18:14:44.531 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:44.903 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:45.130 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:45.308 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:45.546 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:45.722 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:45.961 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:46.397 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:47.158 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:47.193 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:47.384 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:47.384 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:47.488 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:47.495 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T18:14:47.495 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse*
2026-03-09T18:14:47.654 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T18:14:47.654 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-09T18:14:47.693 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117443 files and directories currently installed.)
2026-03-09T18:14:47.695 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:48.062 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:48.097 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:48.130 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:14:48.223 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.)
2026-03-09T18:14:48.225 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:48.314 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:48.315 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:48.532 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:48.533 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:48.533 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:48.534 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:48.534 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:48.534 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:48.534 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:48.534 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:48.535 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:48.557 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T18:14:48.558 INFO:teuthology.orchestra.run.vm09.stdout: ceph-fuse*
2026-03-09T18:14:48.780 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T18:14:48.780 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-09T18:14:48.818 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117443 files and directories currently installed.)
2026-03-09T18:14:48.819 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:49.231 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:14:49.344 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.)
2026-03-09T18:14:49.347 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:49.787 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:49.822 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:49.981 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:49.981 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:50.081 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:50.082 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:50.082 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:50.082 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:50.082 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:50.082 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:50.082 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:50.095 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:50.095 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:50.130 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:50.356 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:50.357 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:50.576 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-09T18:14:50.576 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:50.576 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:50.576 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:50.577 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:50.578 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:50.578 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:50.605 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:50.605 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:50.642 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:50.855 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:50.855 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:51.035 INFO:teuthology.orchestra.run.vm03.stdout:Package 'radosgw' is not installed, so not removed
2026-03-09T18:14:51.035 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:51.035 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:51.035 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:51.035 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:51.035 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:51.035 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:51.035 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:51.036 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:51.050 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:51.050 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:51.050 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:51.087 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:51.087 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:51.301 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:51.302 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:51.329 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:51.330 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:51.491 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:51.492 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:51.510 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-09T18:14:51.511 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:51.514 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:51.514 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:51.519 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T18:14:51.519 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-09T18:14:51.548 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:51.694 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-09T18:14:51.694 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-09T18:14:51.729 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T18:14:51.730 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:51.742 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:51.752 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:51.783 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:51.784 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:52.068 INFO:teuthology.orchestra.run.vm09.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-09T18:14:52.068 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:52.068 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:52.068 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:52.069 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:52.069 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:52.069 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:52.069 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:52.069 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:52.069 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:52.069 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:52.069 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:52.070 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:52.070 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:52.070 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:52.070 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:52.070 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:52.070 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:52.070 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:52.099 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:52.100 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:52.133 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:52.364 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:52.364 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:52.592 INFO:teuthology.orchestra.run.vm09.stdout:Package 'radosgw' is not installed, so not removed
2026-03-09T18:14:52.592 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:52.592 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:52.592 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-09T18:14:52.593 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:52.620 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:52.621 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:52.657 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:52.821 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:52.846 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:52.846 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:52.856 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip
2026-03-09T18:14:52.953 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:52.962 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T18:14:52.962 INFO:teuthology.orchestra.run.vm09.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-09T18:14:53.049 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:53.049 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:53.135 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded.
2026-03-09T18:14:53.135 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-09T18:14:53.168 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T18:14:53.169 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:53.181 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:53.191 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:53.303 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-09T18:14:53.303 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:53.303 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:53.303 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T18:14:53.304 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T18:14:53.304 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-09T18:14:53.305 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:53.342 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:53.343 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:53.378 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:53.592 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:53.593 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:53.829 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-09T18:14:53.829 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:53.829 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:53.830 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T18:14:53.830 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T18:14:53.830 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T18:14:53.830 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T18:14:53.830 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-09T18:14:53.831 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:53.857 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:53.858 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:53.892 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:14:54.119 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T18:14:54.120 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-09T18:14:54.278 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:54.313 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:54.373 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:54.373 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:54.373 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T18:14:54.374 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T18:14:54.374 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip
2026-03-09T18:14:54.375 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:54.393 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-09T18:14:54.394 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd*
2026-03-09T18:14:54.473 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:54.474 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:54.587 INFO:teuthology.orchestra.run.vm09.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-09T18:14:54.587 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:54.587 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:54.587 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T18:14:54.587 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T18:14:54.587 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip
2026-03-09T18:14:54.588 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:54.597 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T18:14:54.597 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-09T18:14:54.616 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:54.616 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:54.632 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117410 files and directories currently installed.)
2026-03-09T18:14:54.634 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:54.654 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:54.884 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:54.884 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:55.108 INFO:teuthology.orchestra.run.vm09.stdout:Package 'python3-cephfs' is not installed, so not removed
2026-03-09T18:14:55.108 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:55.108 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:55.109 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T18:14:55.109 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T18:14:55.109 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip
2026-03-09T18:14:55.110 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:55.138 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-09T18:14:55.138 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:14:55.175 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:14:55.395 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree...
2026-03-09T18:14:55.395 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information...
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required:
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-09T18:14:55.614 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip
2026-03-09T18:14:55.615 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-09T18:14:55.622 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED:
2026-03-09T18:14:55.622 INFO:teuthology.orchestra.run.vm09.stdout: python3-rbd*
2026-03-09T18:14:55.809 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-09T18:14:55.809 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-09T18:14:55.846 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... 117410 files and directories currently installed.)
2026-03-09T18:14:55.849 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:14:55.973 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:14:56.010 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:14:56.247 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T18:14:56.248 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T18:14:56.506 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:14:56.507 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:14:56.507 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:14:56.507 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:14:56.508 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-09T18:14:56.509 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:14:56.532 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T18:14:56.533 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev* libcephfs2* 2026-03-09T18:14:56.758 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T18:14:56.758 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T18:14:56.805 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 
75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T18:14:56.808 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:56.820 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:56.846 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:14:57.210 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:14:57.245 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T18:14:57.464 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T18:14:57.464 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 
2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip 2026-03-09T18:14:57.694 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:14:57.701 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T18:14:57.702 INFO:teuthology.orchestra.run.vm09.stdout: libcephfs-dev* libcephfs2* 2026-03-09T18:14:57.865 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T18:14:57.865 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T18:14:57.902 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T18:14:57.904 INFO:teuthology.orchestra.run.vm09.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T18:14:57.917 INFO:teuthology.orchestra.run.vm09.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:57.959 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:14:58.179 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:14:58.216 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:14:58.452 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T18:14:58.452 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T18:14:58.687 INFO:teuthology.orchestra.run.vm03.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T18:14:58.687 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:14:58.687 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:14:58.687 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:14:58.687 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections 
python3-jaraco.functools python3-jaraco.text 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-09T18:14:58.688 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:14:58.704 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:14:58.704 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:14:58.744 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:14:58.923 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 
2026-03-09T18:14:58.924 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:14:59.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 
2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:14:59.061 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:14:59.069 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T18:14:59.069 INFO:teuthology.orchestra.run.vm03.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T18:14:59.069 INFO:teuthology.orchestra.run.vm03.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T18:14:59.225 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:14:59.241 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T18:14:59.241 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T18:14:59.262 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T18:14:59.277 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 
10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T18:14:59.279 INFO:teuthology.orchestra.run.vm03.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:59.291 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:59.303 INFO:teuthology.orchestra.run.vm03.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:59.314 INFO:teuthology.orchestra.run.vm03.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T18:14:59.515 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T18:14:59.515 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 
2026-03-09T18:14:59.695 INFO:teuthology.orchestra.run.vm09.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa 
python3-simplegeneric python3-simplejson 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout: xmlstarlet zip 2026-03-09T18:14:59.696 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:14:59.711 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:14:59.711 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:14:59.740 INFO:teuthology.orchestra.run.vm03.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:59.751 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T18:14:59.757 INFO:teuthology.orchestra.run.vm03.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:59.777 INFO:teuthology.orchestra.run.vm03.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:14:59.810 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:14:59.845 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:14:59.924 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 
30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T18:14:59.925 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T18:14:59.926 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T18:14:59.927 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T18:15:00.085 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:15:00.085 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:15:00.085 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:15:00.086 INFO:teuthology.orchestra.run.vm09.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:15:00.086 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:15:00.086 INFO:teuthology.orchestra.run.vm09.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:15:00.086 INFO:teuthology.orchestra.run.vm09.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:15:00.086 INFO:teuthology.orchestra.run.vm09.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:15:00.086 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common 
python3-cheroot 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:15:00.087 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T18:15:00.104 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T18:15:00.105 INFO:teuthology.orchestra.run.vm09.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T18:15:00.105 INFO:teuthology.orchestra.run.vm09.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T18:15:00.300 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T18:15:00.300 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T18:15:00.346 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T18:15:00.349 INFO:teuthology.orchestra.run.vm09.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:15:00.362 INFO:teuthology.orchestra.run.vm09.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:15:00.383 INFO:teuthology.orchestra.run.vm09.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:15:00.397 INFO:teuthology.orchestra.run.vm09.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T18:15:00.829 INFO:teuthology.orchestra.run.vm09.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:15:00.842 INFO:teuthology.orchestra.run.vm09.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T18:15:00.857 INFO:teuthology.orchestra.run.vm09.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:15:00.884 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T18:15:00.919 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T18:15:00.981 INFO:teuthology.orchestra.run.vm09.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T18:15:00.983 INFO:teuthology.orchestra.run.vm09.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T18:15:01.473 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:15:01.518 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:15:01.762 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T18:15:01.762 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-09T18:15:02.026 INFO:teuthology.orchestra.run.vm03.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T18:15:02.026 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:15:02.026 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:15:02.026 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:15:02.026 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:15:02.026 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:15:02.026 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:15:02.027 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:15:02.064 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:15:02.064 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:15:02.100 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:15:02.339 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T18:15:02.339 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-09T18:15:02.587 INFO:teuthology.orchestra.run.vm03.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T18:15:02.587 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:15:02.587 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:15:02.587 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:15:02.587 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:15:02.587 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:15:02.588 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:15:02.589 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:15:02.617 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:15:02.617 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:15:02.619 DEBUG:teuthology.orchestra.run.vm03:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T18:15:02.679 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T18:15:02.721 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:15:02.757 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T18:15:02.759 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T18:15:02.909 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T18:15:02.909 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T18:15:02.999 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T18:15:02.999 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T18:15:03.089 INFO:teuthology.orchestra.run.vm09.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T18:15:03.089 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:15:03.089 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:15:03.089 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:15:03.089 INFO:teuthology.orchestra.run.vm09.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:15:03.089 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:15:03.089 INFO:teuthology.orchestra.run.vm09.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:15:03.089 INFO:teuthology.orchestra.run.vm09.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:15:03.090 
INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:15:03.090 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:15:03.104 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:15:03.104 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:15:03.137 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 
2026-03-09T18:15:03.252 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T18:15:03.252 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:15:03.252 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:15:03.252 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:15:03.252 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:15:03.252 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 
2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:15:03.253 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:15:03.342 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T18:15:03.343 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T18:15:03.431 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T18:15:03.431 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T18:15:03.476 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117336 files and directories currently installed.) 
2026-03-09T18:15:03.478 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:15:03.488 INFO:teuthology.orchestra.run.vm09.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T18:15:03.488 INFO:teuthology.orchestra.run.vm09.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T18:15:03.488 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:15:03.488 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:15:03.488 INFO:teuthology.orchestra.run.vm09.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:15:03.488 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy 
python3-pastescript 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:15:03.489 INFO:teuthology.orchestra.run.vm09.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T18:15:03.497 INFO:teuthology.orchestra.run.vm03.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T18:15:03.503 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T18:15:03.503 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T18:15:03.505 DEBUG:teuthology.orchestra.run.vm09:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T18:15:03.511 INFO:teuthology.orchestra.run.vm03.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T18:15:03.525 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 
2026-03-09T18:15:03.542 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T18:15:03.555 INFO:teuthology.orchestra.run.vm03.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T18:15:03.559 DEBUG:teuthology.orchestra.run.vm09:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T18:15:03.568 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:15:03.581 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:15:03.595 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:15:03.620 INFO:teuthology.orchestra.run.vm03.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T18:15:03.635 INFO:teuthology.orchestra.run.vm03.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T18:15:03.637 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists... 2026-03-09T18:15:03.650 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:15:03.664 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:15:03.677 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:15:03.690 INFO:teuthology.orchestra.run.vm03.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:15:03.703 INFO:teuthology.orchestra.run.vm03.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T18:15:03.715 INFO:teuthology.orchestra.run.vm03.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T18:15:03.727 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 
2026-03-09T18:15:03.742 INFO:teuthology.orchestra.run.vm03.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T18:15:03.773 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T18:15:03.785 INFO:teuthology.orchestra.run.vm03.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T18:15:03.796 INFO:teuthology.orchestra.run.vm03.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T18:15:03.807 INFO:teuthology.orchestra.run.vm03.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T18:15:03.818 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T18:15:03.830 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T18:15:03.841 INFO:teuthology.orchestra.run.vm03.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T18:15:03.854 INFO:teuthology.orchestra.run.vm03.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T18:15:03.866 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:15:03.869 INFO:teuthology.orchestra.run.vm09.stdout:Building dependency tree... 2026-03-09T18:15:03.870 INFO:teuthology.orchestra.run.vm09.stdout:Reading state information... 2026-03-09T18:15:03.873 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T18:15:03.884 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:15:03.903 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T18:15:03.916 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T18:15:03.927 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T18:15:03.939 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 
2026-03-09T18:15:03.955 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T18:15:03.972 INFO:teuthology.orchestra.run.vm03.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T18:15:04.098 INFO:teuthology.orchestra.run.vm09.stdout:The following packages will be REMOVED: 2026-03-09T18:15:04.099 INFO:teuthology.orchestra.run.vm09.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T18:15:04.099 INFO:teuthology.orchestra.run.vm09.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T18:15:04.099 INFO:teuthology.orchestra.run.vm09.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T18:15:04.099 INFO:teuthology.orchestra.run.vm09.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T18:15:04.100 
INFO:teuthology.orchestra.run.vm09.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T18:15:04.100 INFO:teuthology.orchestra.run.vm09.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T18:15:04.101 INFO:teuthology.orchestra.run.vm09.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T18:15:04.101 INFO:teuthology.orchestra.run.vm09.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T18:15:04.101 INFO:teuthology.orchestra.run.vm09.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T18:15:04.101 INFO:teuthology.orchestra.run.vm09.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T18:15:04.571 INFO:teuthology.orchestra.run.vm09.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T18:15:04.572 INFO:teuthology.orchestra.run.vm09.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T18:15:04.628 INFO:teuthology.orchestra.run.vm03.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T18:15:04.645 INFO:teuthology.orchestra.run.vm09.stdout:
100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T18:15:04.647 INFO:teuthology.orchestra.run.vm09.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T18:15:04.661 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T18:15:04.662 INFO:teuthology.orchestra.run.vm09.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T18:15:04.672 INFO:teuthology.orchestra.run.vm09.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T18:15:04.682 INFO:teuthology.orchestra.run.vm09.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T18:15:04.689 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T18:15:04.693 INFO:teuthology.orchestra.run.vm09.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T18:15:04.703 INFO:teuthology.orchestra.run.vm09.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T18:15:04.713 INFO:teuthology.orchestra.run.vm09.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:15:04.724 INFO:teuthology.orchestra.run.vm09.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:15:04.734 INFO:teuthology.orchestra.run.vm09.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T18:15:04.750 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T18:15:04.752 INFO:teuthology.orchestra.run.vm09.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T18:15:04.761 INFO:teuthology.orchestra.run.vm09.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T18:15:04.771 INFO:teuthology.orchestra.run.vm09.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:15:04.780 INFO:teuthology.orchestra.run.vm09.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 
2026-03-09T18:15:04.789 INFO:teuthology.orchestra.run.vm09.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:15:04.798 INFO:teuthology.orchestra.run.vm09.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T18:15:04.807 INFO:teuthology.orchestra.run.vm09.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T18:15:04.808 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T18:15:04.816 INFO:teuthology.orchestra.run.vm09.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T18:15:04.826 INFO:teuthology.orchestra.run.vm09.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T18:15:04.836 INFO:teuthology.orchestra.run.vm09.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T18:15:04.860 INFO:teuthology.orchestra.run.vm09.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T18:15:04.873 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T18:15:04.878 INFO:teuthology.orchestra.run.vm09.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T18:15:04.889 INFO:teuthology.orchestra.run.vm09.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T18:15:04.898 INFO:teuthology.orchestra.run.vm09.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T18:15:04.908 INFO:teuthology.orchestra.run.vm09.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T18:15:04.917 INFO:teuthology.orchestra.run.vm09.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T18:15:04.924 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T18:15:04.927 INFO:teuthology.orchestra.run.vm09.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T18:15:04.935 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 
2026-03-09T18:15:04.938 INFO:teuthology.orchestra.run.vm09.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ...
2026-03-09T18:15:04.949 INFO:teuthology.orchestra.run.vm09.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ...
2026-03-09T18:15:04.956 INFO:teuthology.orchestra.run.vm09.stdout:update-initramfs: deferring update (trigger activated)
2026-03-09T18:15:04.965 INFO:teuthology.orchestra.run.vm09.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ...
2026-03-09T18:15:04.984 INFO:teuthology.orchestra.run.vm09.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ...
2026-03-09T18:15:04.995 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T18:15:04.995 INFO:teuthology.orchestra.run.vm09.stdout:Removing lua-any (27ubuntu1) ...
2026-03-09T18:15:05.006 INFO:teuthology.orchestra.run.vm09.stdout:Removing lua-sec:amd64 (1.0.2-1) ...
2026-03-09T18:15:05.017 INFO:teuthology.orchestra.run.vm09.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-09T18:15:05.031 INFO:teuthology.orchestra.run.vm09.stdout:Removing lua5.1 (5.1.5-8.1build4) ...
2026-03-09T18:15:05.048 INFO:teuthology.orchestra.run.vm09.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-09T18:15:05.291 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-09T18:15:05.357 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-09T18:15:05.406 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:15:05.454 INFO:teuthology.orchestra.run.vm09.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-09T18:15:05.455 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:15:05.492 INFO:teuthology.orchestra.run.vm09.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-09T18:15:05.512 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-09T18:15:05.522 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-09T18:15:05.579 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T18:15:05.591 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-09T18:15:05.633 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-09T18:15:05.647 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-09T18:15:05.684 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-09T18:15:05.706 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-09T18:15:05.736 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-09T18:15:05.758 INFO:teuthology.orchestra.run.vm09.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-09T18:15:05.771 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-09T18:15:05.786 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-09T18:15:05.833 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-09T18:15:05.835 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-09T18:15:05.890 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-09T18:15:05.946 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T18:15:06.087 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T18:15:06.127 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-09T18:15:06.168 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-09T18:15:06.189 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-09T18:15:06.813 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:15:06.814 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T18:15:06.867 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T18:15:06.871 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-09T18:15:06.922 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-09T18:15:06.928 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T18:15:06.987 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-09T18:15:06.999 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-09T18:15:07.041 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-09T18:15:07.052 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-09T18:15:07.092 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-09T18:15:07.116 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-09T18:15:07.152 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-09T18:15:07.184 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T18:15:07.223 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-09T18:15:07.248 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-09T18:15:07.279 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-09T18:15:07.301 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T18:15:07.332 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-09T18:15:07.354 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rsa (4.8-1) ...
2026-03-09T18:15:07.383 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-09T18:15:07.406 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-09T18:15:07.458 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-09T18:15:07.519 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-09T18:15:07.522 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-09T18:15:07.568 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T18:15:07.586 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-09T18:15:07.594 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T18:15:07.642 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-09T18:15:07.644 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-09T18:15:07.697 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-09T18:15:07.699 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T18:15:07.769 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-09T18:15:07.774 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T18:15:07.824 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T18:15:07.839 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-09T18:15:07.956 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-09T18:15:07.960 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-09T18:15:08.026 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T18:15:08.026 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-09T18:15:08.079 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-09T18:15:08.083 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-09T18:15:08.137 INFO:teuthology.orchestra.run.vm03.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-09T18:15:08.139 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-09T18:15:08.160 INFO:teuthology.orchestra.run.vm03.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T18:15:08.193 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-09T18:15:08.247 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-rsa (4.8-1) ...
2026-03-09T18:15:08.303 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-09T18:15:08.360 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-09T18:15:08.418 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-09T18:15:08.488 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-09T18:15:08.516 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-09T18:15:08.566 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-09T18:15:08.568 INFO:teuthology.orchestra.run.vm03.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-09T18:15:08.582 INFO:teuthology.orchestra.run.vm03.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-09T18:15:08.606 INFO:teuthology.orchestra.run.vm03.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-09T18:15:08.620 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-09T18:15:08.628 INFO:teuthology.orchestra.run.vm03.stdout:Removing zip (3.0-12build2) ...
2026-03-09T18:15:08.656 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T18:15:08.668 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:15:08.672 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-09T18:15:08.718 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T18:15:08.723 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-09T18:15:08.727 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-09T18:15:08.749 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-09T18:15:08.782 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-09T18:15:08.841 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-09T18:15:08.894 INFO:teuthology.orchestra.run.vm09.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-09T18:15:08.941 INFO:teuthology.orchestra.run.vm09.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-09T18:15:08.965 INFO:teuthology.orchestra.run.vm09.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-09T18:15:09.390 INFO:teuthology.orchestra.run.vm09.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-09T18:15:09.403 INFO:teuthology.orchestra.run.vm09.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-09T18:15:09.424 INFO:teuthology.orchestra.run.vm09.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-09T18:15:09.444 INFO:teuthology.orchestra.run.vm09.stdout:Removing zip (3.0-12build2) ...
2026-03-09T18:15:09.473 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-09T18:15:09.486 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-09T18:15:09.538 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-09T18:15:09.549 INFO:teuthology.orchestra.run.vm09.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-09T18:15:09.570 INFO:teuthology.orchestra.run.vm09.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-09T18:15:10.384 INFO:teuthology.orchestra.run.vm03.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-09T18:15:10.385 INFO:teuthology.orchestra.run.vm03.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-09T18:15:11.172 INFO:teuthology.orchestra.run.vm09.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-09T18:15:11.173 INFO:teuthology.orchestra.run.vm09.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-09T18:15:12.614 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:15:12.617 DEBUG:teuthology.parallel:result is None
2026-03-09T18:15:13.393 INFO:teuthology.orchestra.run.vm09.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-09T18:15:13.396 DEBUG:teuthology.parallel:result is None
2026-03-09T18:15:13.396 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local
2026-03-09T18:15:13.396 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm09.local
2026-03-09T18:15:13.396 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-09T18:15:13.396 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-09T18:15:13.404 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update
2026-03-09T18:15:13.448 DEBUG:teuthology.orchestra.run.vm09:> sudo apt-get update
2026-03-09T18:15:13.711 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T18:15:13.741 INFO:teuthology.orchestra.run.vm09.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-09T18:15:13.741 INFO:teuthology.orchestra.run.vm09.stdout:Hit:2 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T18:15:13.741 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T18:15:13.772 INFO:teuthology.orchestra.run.vm09.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-09T18:15:13.777 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T18:15:13.808 INFO:teuthology.orchestra.run.vm09.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-09T18:15:13.919 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-09T18:15:14.792 INFO:teuthology.orchestra.run.vm09.stdout:Reading package lists...
2026-03-09T18:15:14.808 DEBUG:teuthology.parallel:result is None
2026-03-09T18:15:14.811 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-09T18:15:14.824 DEBUG:teuthology.parallel:result is None
2026-03-09T18:15:14.824 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-09T18:15:14.826 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-09T18:15:14.826 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:15:14.827 DEBUG:teuthology.orchestra.run.vm09:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T18:15:16.694 INFO:teuthology.orchestra.run.vm09.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:==============================================================================
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:#185.168.228.58 237.17.204.95 2 u 89 64 7 36.091 +0.291 0.145
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:-ns8.starka.st 129.134.28.123 2 u 37 64 77 22.751 -1.249 0.692
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:+mail.anyvm.tech 66.249.115.192 3 u 43 64 77 23.580 -0.068 0.256
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:+vps-fra2.orlean 169.254.169.254 4 u 43 64 77 21.018 -0.113 0.249
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:-158.101.188.125 189.97.54.122 2 u 35 64 77 20.964 -1.019 0.214
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:*node-4.infogral 168.239.11.197 2 u 44 64 77 23.600 +0.191 0.293
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:-v22025082392863 129.69.253.1 2 u 45 64 77 28.247 -2.544 0.371
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:-ntp2.wup-de.hos 237.17.204.95 2 u 46 64 77 33.789 +1.301 0.311
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:-141.84.43.73 40.33.41.76 2 u 38 64 77 35.049 +0.303 0.340
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:-cp.hypermediaa. 189.97.54.122 2 u 35 64 57 25.020 -0.609 0.221
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:#ntp1.as213151.n 131.188.3.222 2 u 41 64 77 29.208 -14.585 36.956
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:#185.125.190.57 194.121.207.249 2 u 53 64 77 33.312 +1.148 0.693
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:#x1.ncomputers.o 82.64.42.185 2 u 33 64 77 31.048 -0.399 0.219
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:#vsrv02141.custo 79.133.44.137 2 u 45 64 37 32.824 +0.833 0.257
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:#185.125.190.56 79.243.60.50 2 u 46 64 77 33.396 +0.916 0.309
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:-185.232.69.65 ( .PHC0. 1 u 33 64 77 28.247 -2.826 0.140
2026-03-09T18:15:16.695 INFO:teuthology.orchestra.run.vm09.stdout:#alphyn.canonica 132.163.96.1 2 u 47 64 37 102.196 -3.256 0.160
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:-ns8.starka.st 129.134.28.123 2 u 43 64 77 22.809 -0.896 0.694
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:+vps-fra2.orlean 169.254.169.254 4 u 45 64 77 20.968 +0.255 0.309
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:+mail.anyvm.tech 66.249.115.192 3 u 43 64 77 23.521 -0.034 0.354
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:-cp.hypermediaa. 189.97.54.122 2 u 48 64 37 25.152 +0.118 0.227
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout: 185.168.228.58 237.17.204.95 2 u 27 64 27 36.382 +0.385 0.281
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:-x1.ncomputers.o 82.64.42.185 2 u 45 64 77 31.595 +0.107 0.151
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:*ntp2.wup-de.hos 237.17.204.95 2 u 46 64 77 31.250 +0.343 0.317
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:-141.84.43.73 40.33.41.76 2 u 49 64 77 31.968 -0.589 0.568
2026-03-09T18:15:16.790 INFO:teuthology.orchestra.run.vm03.stdout:-158.101.188.125 189.97.54.122 2 u 44 64 77 21.001 -0.289 0.354
2026-03-09T18:15:16.791 INFO:teuthology.orchestra.run.vm03.stdout:#ntp1.as213151.n 192.150.70.56 2 u 45 64 77 28.546 +29.101 41.041
2026-03-09T18:15:16.791 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.57 194.121.207.249 2 u 55 64 77 35.302 -0.730 0.264
2026-03-09T18:15:16.791 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.56 79.243.60.50 2 u 55 64 77 32.207 +0.570 0.182
2026-03-09T18:15:16.791 INFO:teuthology.orchestra.run.vm03.stdout:-vsrv02141.custo 79.133.44.137 2 u 41 64 77 32.734 +0.884 0.179
2026-03-09T18:15:16.791 INFO:teuthology.orchestra.run.vm03.stdout:-185.232.69.65 ( .PHC0. 1 u 42 64 77 28.279 -2.433 0.133
2026-03-09T18:15:16.791 INFO:teuthology.orchestra.run.vm03.stdout:#alphyn.canonica 132.163.96.1 2 u 119 64 76 98.896 +0.189 0.221
2026-03-09T18:15:16.791 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.58 145.238.80.80 2 u 58 64 77 36.582 -0.725 0.177
2026-03-09T18:15:16.791 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T18:15:16.793 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T18:15:16.793 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T18:15:16.796 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T18:15:16.798 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T18:15:16.800 INFO:teuthology.task.internal:Duration was 531.711380 seconds
2026-03-09T18:15:16.800 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T18:15:16.802 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T18:15:16.802 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T18:15:16.804 DEBUG:teuthology.orchestra.run.vm09:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T18:15:16.828 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-09T18:15:16.828 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-09T18:15:16.828 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T18:15:16.883 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm09.local
2026-03-09T18:15:16.883 DEBUG:teuthology.orchestra.run.vm09:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T18:15:16.897 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T18:15:16.897 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T18:15:16.926 DEBUG:teuthology.orchestra.run.vm09:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T18:15:16.968 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-09T18:15:16.968 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T18:15:16.999 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T18:15:17.004 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:15:17.005 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:15:17.005 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0%gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T18:15:17.005 INFO:teuthology.orchestra.run.vm03.stderr: -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T18:15:17.005 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T18:15:17.011 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 87.9% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T18:15:17.016 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T18:15:17.016 INFO:teuthology.orchestra.run.vm09.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T18:15:17.016 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0%gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T18:15:17.017 INFO:teuthology.orchestra.run.vm09.stderr: -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T18:15:17.017 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T18:15:17.023 INFO:teuthology.orchestra.run.vm09.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 87.7% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T18:15:17.023 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T18:15:17.026 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T18:15:17.026 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T18:15:17.062 DEBUG:teuthology.orchestra.run.vm09:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T18:15:17.073 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T18:15:17.075 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T18:15:17.107 DEBUG:teuthology.orchestra.run.vm09:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T18:15:17.112 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-09T18:15:17.123 INFO:teuthology.orchestra.run.vm09.stdout:kernel.core_pattern = core
2026-03-09T18:15:17.131 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T18:15:17.164 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:15:17.164 DEBUG:teuthology.orchestra.run.vm09:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T18:15:17.176 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T18:15:17.176 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T18:15:17.180 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T18:15:17.181 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597/remote/vm03
2026-03-09T18:15:17.181 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T18:15:17.213 DEBUG:teuthology.misc:Transferring archived files from vm09:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/597/remote/vm09
2026-03-09T18:15:17.213 DEBUG:teuthology.orchestra.run.vm09:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T18:15:17.225 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T18:15:17.225 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T18:15:17.258 DEBUG:teuthology.orchestra.run.vm09:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T18:15:17.268 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T18:15:17.271 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T18:15:17.271 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T18:15:17.274 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T18:15:17.274 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T18:15:17.302 DEBUG:teuthology.orchestra.run.vm09:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T18:15:17.304 INFO:teuthology.orchestra.run.vm03.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 18:15 /home/ubuntu/cephtest
2026-03-09T18:15:17.312 INFO:teuthology.orchestra.run.vm09.stdout: 258207 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 18:15 /home/ubuntu/cephtest
2026-03-09T18:15:17.313 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T18:15:17.318 INFO:teuthology.run:Summary data:
description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_ca_signed_key}
duration: 531.7113797664642
flavor: default
owner: kyr
success: true
2026-03-09T18:15:17.318 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T18:15:17.340 INFO:teuthology.run:pass