2026-03-10T11:04:00.530 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T11:04:00.534 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T11:04:00.551 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009
branch: squid
description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_ca_signed_key}
email: null
first_in_suite: false
flavor: default
job_id: '1009'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    use-ca-signed-key: true
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
  - client.0
- - host.b
  - mon.b
  - mgr.b
  - osd.1
  - client.1
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNvVs3WTsKtTW/aDglOxf2SQK30IkBhqPKv3zGsDPG3gXn1XWBsHWlltslYztlyxMatjE1sd+dBKIMreynLcCbM=
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ2GUXhUVif6qjEVgI1ms03uTA2UDYL7SxI8X7GcIfxGbybHtwK5nlj74E4wxrPDTLc2ZpYUqgWbQWUpcrsplpE=
tasks:
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - "set -ex\nHOSTNAMES=$(ceph orch host ls --format json | jq -r '.[] | .hostname')\nfor host in $HOSTNAMES; do\n  # do a check-host on each host to make sure it's reachable\n  ceph cephadm check-host ${host} 2> ${host}-ok.txt\n  HOST_OK=$(cat ${host}-ok.txt)\n  if ! grep -q \"Host looks OK\" <<< \"$HOST_OK\"; then\n    printf \"Failed host check:\\n\\n$HOST_OK\"\n    exit 1\n  fi\ndone\n"
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T11:04:00.551 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T11:04:00.552 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T11:04:00.552 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T11:04:00.552 INFO:teuthology.task.internal:Checking packages...
2026-03-10T11:04:00.552 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T11:04:00.552 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T11:04:00.552 INFO:teuthology.packaging:ref: None
2026-03-10T11:04:00.552 INFO:teuthology.packaging:tag: None
2026-03-10T11:04:00.552 INFO:teuthology.packaging:branch: squid
2026-03-10T11:04:00.552 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:04:00.552 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T11:04:01.182 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T11:04:01.184 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T11:04:01.184 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T11:04:01.184 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T11:04:01.185 INFO:teuthology.task.internal:Saving configuration
2026-03-10T11:04:01.190 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T11:04:01.191 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T11:04:01.198 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 11:02:45.548581', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNvVs3WTsKtTW/aDglOxf2SQK30IkBhqPKv3zGsDPG3gXn1XWBsHWlltslYztlyxMatjE1sd+dBKIMreynLcCbM='}
2026-03-10T11:04:01.203 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 11:02:45.548053', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ2GUXhUVif6qjEVgI1ms03uTA2UDYL7SxI8X7GcIfxGbybHtwK5nlj74E4wxrPDTLc2ZpYUqgWbQWUpcrsplpE='}
2026-03-10T11:04:01.203 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T11:04:01.204 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0', 'client.0']
2026-03-10T11:04:01.204 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['host.b', 'mon.b', 'mgr.b', 'osd.1', 'client.1']
2026-03-10T11:04:01.204 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T11:04:01.212 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-10T11:04:01.216 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-10T11:04:01.217 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f0a07e8be20>, signals=[15])
2026-03-10T11:04:01.217 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T11:04:01.217 INFO:teuthology.task.internal:Opening connections...
2026-03-10T11:04:01.217 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-10T11:04:01.218 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T11:04:01.274 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-10T11:04:01.275 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T11:04:01.334 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T11:04:01.335 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-10T11:04:01.360 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-10T11:04:01.360 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:NAME="Ubuntu"
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="22.04"
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_CODENAME=jammy
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:ID=ubuntu
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE=debian
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T11:04:01.403 INFO:teuthology.orchestra.run.vm00.stdout:UBUNTU_CODENAME=jammy
2026-03-10T11:04:01.403 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-10T11:04:01.408 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-10T11:04:01.411 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-10T11:04:01.411 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-10T11:04:01.456 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T11:04:01.456 INFO:teuthology.orchestra.run.vm03.stdout:NAME="Ubuntu"
2026-03-10T11:04:01.456 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="22.04"
2026-03-10T11:04:01.456 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T11:04:01.457 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_CODENAME=jammy
2026-03-10T11:04:01.457 INFO:teuthology.orchestra.run.vm03.stdout:ID=ubuntu
2026-03-10T11:04:01.457 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE=debian
2026-03-10T11:04:01.457 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T11:04:01.457 INFO:teuthology.orchestra.run.vm03.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T11:04:01.457 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T11:04:01.457 INFO:teuthology.orchestra.run.vm03.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T11:04:01.457 INFO:teuthology.orchestra.run.vm03.stdout:UBUNTU_CODENAME=jammy
2026-03-10T11:04:01.457 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-10T11:04:01.461 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T11:04:01.463 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T11:04:01.464 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T11:04:01.464 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-10T11:04:01.465 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-10T11:04:01.500 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T11:04:01.501 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T11:04:01.502 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-10T11:04:01.509 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-10T11:04:01.511 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T11:04:01.544 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T11:04:01.545 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T11:04:01.552 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-10T11:04:01.555 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:04:01.886 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-10T11:04:01.889 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:04:02.118 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T11:04:02.119 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T11:04:02.119 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T11:04:02.120 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T11:04:02.123 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T11:04:02.125 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T11:04:02.126 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T11:04:02.126 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T11:04:02.165 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T11:04:02.169 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T11:04:02.171 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T11:04:02.171 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T11:04:02.210 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:04:02.210 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T11:04:02.212 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:04:02.212 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T11:04:02.252 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T11:04:02.259 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T11:04:02.260 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T11:04:02.263 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T11:04:02.264 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T11:04:02.265 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T11:04:02.267 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T11:04:02.267 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T11:04:02.305 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T11:04:02.317 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T11:04:02.319 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T11:04:02.319 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T11:04:02.353 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T11:04:02.360 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T11:04:02.398 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T11:04:02.442 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T11:04:02.442 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T11:04:02.490 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T11:04:02.494 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T11:04:02.540 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T11:04:02.540 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T11:04:02.594 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-10T11:04:02.595 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-10T11:04:02.654 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T11:04:02.655 INFO:teuthology.task.internal:Starting timer...
2026-03-10T11:04:02.655 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T11:04:02.658 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T11:04:02.660 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-10T11:04:02.660 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-10T11:04:02.660 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T11:04:02.660 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T11:04:02.660 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T11:04:02.660 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T11:04:02.662 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T11:04:02.662 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T11:04:02.663 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T11:04:03.397 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T11:04:03.402 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T11:04:03.403 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryqxa662p5 --limit vm00.local,vm03.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T11:06:23.643 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm03.local')]
2026-03-10T11:06:23.643 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-10T11:06:23.643 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T11:06:23.702 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-10T11:06:23.940 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-10T11:06:23.941 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-10T11:06:23.941 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T11:06:24.003 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-10T11:06:24.225 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-10T11:06:24.225 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T11:06:24.228 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T11:06:24.228 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T11:06:24.228 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T11:06:24.229 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T11:06:24.229 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Command line: ntpd -gq
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: ----------------------------------------------------
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: corporation. Support and training for ntp-4 are
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: available at https://www.nwtime.org/support
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: ----------------------------------------------------
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: proto: precision = 0.029 usec (-25)
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: basedate set to 2022-02-04
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: gps base set to 2022-02-06 (week 2196)
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T11:06:24.245 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T11:06:24.246 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T11:06:24.246 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T11:06:24.246 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T11:06:24.246 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Listen normally on 3 ens3 192.168.123.100:123
2026-03-10T11:06:24.246 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Listen normally on 4 lo [::1]:123
2026-03-10T11:06:24.246 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:0%2]:123
2026-03-10T11:06:24.246 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:24 ntpd[16106]: Listening on routing socket on fd #22 for interface updates
2026-03-10T11:06:24.246 INFO:teuthology.orchestra.run.vm00.stderr:10 Mar 11:06:24 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Command line: ntpd -gq
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: ----------------------------------------------------
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: corporation. Support and training for ntp-4 are
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: available at https://www.nwtime.org/support
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: ----------------------------------------------------
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: proto: precision = 0.029 usec (-25)
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: basedate set to 2022-02-04
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: gps base set to 2022-02-06 (week 2196)
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stderr:10 Mar 11:06:24 ntpd[16108]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Listen normally on 3 ens3 192.168.123.103:123
2026-03-10T11:06:24.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Listen normally on 4 lo [::1]:123
2026-03-10T11:06:24.285 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:3%2]:123
2026-03-10T11:06:24.285 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:24 ntpd[16108]: Listening on routing socket on fd #22 for interface updates
2026-03-10T11:06:25.244 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:25 ntpd[16106]: Soliciting pool server 188.40.128.242
2026-03-10T11:06:25.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:25 ntpd[16108]: Soliciting pool server 85.215.166.214
2026-03-10T11:06:26.243 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:26 ntpd[16106]: Soliciting pool server 142.132.200.241
2026-03-10T11:06:26.244 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:26 ntpd[16106]: Soliciting pool server 85.220.190.246
2026-03-10T11:06:26.283 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:26 ntpd[16108]: Soliciting pool server 188.40.128.242
2026-03-10T11:06:26.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:26 ntpd[16108]: Soliciting pool server 152.53.191.142
2026-03-10T11:06:27.243 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:27 ntpd[16106]: Soliciting pool server 49.12.199.148
2026-03-10T11:06:27.243 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:27 ntpd[16106]: Soliciting pool server 129.250.35.250
2026-03-10T11:06:27.243 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:27 ntpd[16106]: Soliciting pool server 139.162.156.95
2026-03-10T11:06:27.283 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:27 ntpd[16108]: Soliciting pool server 85.220.190.246
2026-03-10T11:06:27.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:27 ntpd[16108]: Soliciting pool server 142.132.200.241
2026-03-10T11:06:27.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:27 ntpd[16108]: Soliciting pool server 93.241.86.156
2026-03-10T11:06:28.243 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:28 ntpd[16106]: Soliciting pool server 139.144.71.56
2026-03-10T11:06:28.243 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:28 ntpd[16106]: Soliciting pool server 139.144.71.56
2026-03-10T11:06:28.243 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:28 ntpd[16106]: Soliciting pool server 85.215.166.214
2026-03-10T11:06:28.243 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:28 ntpd[16106]: Soliciting pool server 172.236.195.26
2026-03-10T11:06:28.283 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:28 ntpd[16108]: Soliciting pool server 139.162.156.95
2026-03-10T11:06:28.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:28 ntpd[16108]: Soliciting pool server 49.12.199.148
2026-03-10T11:06:28.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:28 ntpd[16108]: Soliciting pool server 129.250.35.250
2026-03-10T11:06:28.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:28 ntpd[16108]: Soliciting pool server 104.167.24.26
2026-03-10T11:06:29.242 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:29 ntpd[16106]: Soliciting pool server 178.215.228.24
2026-03-10T11:06:29.242 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:29 ntpd[16106]: Soliciting pool server 173.249.58.145
2026-03-10T11:06:29.242 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:29 ntpd[16106]: Soliciting pool server 185.125.190.57
2026-03-10T11:06:29.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:29 ntpd[16108]: Soliciting pool server 172.236.195.26
2026-03-10T11:06:29.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:29 ntpd[16108]: Soliciting pool server 139.144.71.56
2026-03-10T11:06:29.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:29 ntpd[16108]: Soliciting pool server 139.144.71.56
2026-03-10T11:06:29.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:29 ntpd[16108]: Soliciting pool server 91.189.91.157
2026-03-10T11:06:30.241 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:30 ntpd[16106]: Soliciting pool server 185.125.190.56
2026-03-10T11:06:30.242 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:30 ntpd[16106]: Soliciting pool server 185.252.140.125
2026-03-10T11:06:30.242 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:30 ntpd[16106]: Soliciting pool server 93.241.86.156
2026-03-10T11:06:30.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:30 ntpd[16108]: Soliciting pool server 185.125.190.57
2026-03-10T11:06:30.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:30 ntpd[16108]: Soliciting pool server 178.215.228.24
2026-03-10T11:06:30.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:30 ntpd[16108]: Soliciting pool server 173.249.58.145
2026-03-10T11:06:31.241 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:31 ntpd[16106]: Soliciting pool server 185.125.190.58
2026-03-10T11:06:31.241 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:31 ntpd[16106]: Soliciting pool server 104.167.24.26
2026-03-10T11:06:31.241 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:31 ntpd[16106]: Soliciting pool server 2003:a:47f:abe4:48ba:cd42:dbcc:1000
2026-03-10T11:06:31.283 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:31 ntpd[16108]: Soliciting pool server 185.125.190.56
2026-03-10T11:06:31.284 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:31 ntpd[16108]: Soliciting pool server 185.252.140.125
2026-03-10T11:06:32.312 INFO:teuthology.orchestra.run.vm03.stdout:10 Mar 11:06:32 ntpd[16108]: ntpd: time slew +0.003636 s
2026-03-10T11:06:32.312 INFO:teuthology.orchestra.run.vm03.stdout:ntpd: time slew +0.003636s
2026-03-10T11:06:32.331 INFO:teuthology.orchestra.run.vm03.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T11:06:32.331 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-10T11:06:32.331 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:32.331 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:32.331 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:32.331 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:32.331 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:33.265 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 11:06:33 ntpd[16106]: ntpd: time slew +0.006915 s
2026-03-10T11:06:33.266 INFO:teuthology.orchestra.run.vm00.stdout:ntpd: time slew +0.006915s
2026-03-10T11:06:33.284 INFO:teuthology.orchestra.run.vm00.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-10T11:06:33.284 INFO:teuthology.orchestra.run.vm00.stdout:==============================================================================
2026-03-10T11:06:33.284 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:33.284 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:33.284 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:33.284 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:33.284 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-10T11:06:33.284 INFO:teuthology.run_tasks:Running task install...
2026-03-10T11:06:33.286 DEBUG:teuthology.task.install:project ceph
2026-03-10T11:06:33.286 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T11:06:33.286 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T11:06:33.286 INFO:teuthology.task.install:Using flavor: default
2026-03-10T11:06:33.288 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T11:06:33.288 INFO:teuthology.task.install:extra packages: []
2026-03-10T11:06:33.288 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-key list | grep Ceph
2026-03-10T11:06:33.288 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-key list | grep Ceph
2026-03-10T11:06:33.323 INFO:teuthology.orchestra.run.vm03.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-10T11:06:33.341 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T11:06:33.341 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T11:06:33.342 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T11:06:33.342 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T11:06:33.342 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:06:33.377 INFO:teuthology.orchestra.run.vm00.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-10T11:06:33.378 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T11:06:33.378 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T11:06:33.378 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T11:06:33.378 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T11:06:33.378 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:06:33.950 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T11:06:33.950 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T11:06:34.015 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T11:06:34.015 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T11:06:34.484 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T11:06:34.484 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T11:06:34.492 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update
2026-03-10T11:06:34.525 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T11:06:34.525 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T11:06:34.534 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update
2026-03-10T11:06:34.792 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T11:06:34.824 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T11:06:34.848 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T11:06:34.859 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T11:06:34.876 INFO:teuthology.orchestra.run.vm00.stdout:Hit:2 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T11:06:34.881 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T11:06:34.918 INFO:teuthology.orchestra.run.vm00.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T11:06:35.009 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T11:06:35.155 INFO:teuthology.orchestra.run.vm00.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T11:06:35.161 INFO:teuthology.orchestra.run.vm03.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T11:06:35.270 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T11:06:35.277 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T11:06:35.385 INFO:teuthology.orchestra.run.vm00.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T11:06:35.394 INFO:teuthology.orchestra.run.vm03.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T11:06:35.500 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T11:06:35.510 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T11:06:35.578 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 25.8 kB in 1s (28.9 kB/s)
2026-03-10T11:06:35.584 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 25.8 kB in 1s (27.4 kB/s)
2026-03-10T11:06:36.260 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T11:06:36.273 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T11:06:36.282 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T11:06:36.295 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T11:06:36.309 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T11:06:36.331 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T11:06:36.501 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T11:06:36.502 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T11:06:36.530 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T11:06:36.531 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T11:06:36.694 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T11:06:36.694 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T11:06:36.695 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T11:06:36.695 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T11:06:36.695 INFO:teuthology.orchestra.run.vm03.stdout:The following additional packages will be installed:
2026-03-10T11:06:36.695 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T11:06:36.695 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T11:06:36.695 INFO:teuthology.orchestra.run.vm03.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T11:06:36.695 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T11:06:36.696 INFO:teuthology.orchestra.run.vm03.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout:Suggested packages:
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout: smart-notifier mailx | mailutils
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout:Recommended packages:
2026-03-10T11:06:36.697 INFO:teuthology.orchestra.run.vm03.stdout: btrfs-tools
2026-03-10T11:06:36.701 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T11:06:36.701 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T11:06:36.701 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T11:06:36.701 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T11:06:36.701 INFO:teuthology.orchestra.run.vm00.stdout:The following additional packages will be installed:
2026-03-10T11:06:36.701 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T11:06:36.702 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout:Suggested packages:
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: smart-notifier mailx | mailutils
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout:Recommended packages:
2026-03-10T11:06:36.703 INFO:teuthology.orchestra.run.vm00.stdout: btrfs-tools
2026-03-10T11:06:36.739 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed:
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm03.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm03.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm03.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed:
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm00.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm00.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T11:06:36.740 INFO:teuthology.orchestra.run.vm00.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T11:06:36.741 INFO:teuthology.orchestra.run.vm03.stdout: socat unzip xmlstarlet zip
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout: socat unzip xmlstarlet zip
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be upgraded:
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be upgraded:
2026-03-10T11:06:36.742 INFO:teuthology.orchestra.run.vm03.stdout: librados2 librbd1
2026-03-10T11:06:36.743 INFO:teuthology.orchestra.run.vm00.stdout: librados2 librbd1
2026-03-10T11:06:36.952 INFO:teuthology.orchestra.run.vm00.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T11:06:36.952 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 178 MB of archives.
2026-03-10T11:06:36.952 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-10T11:06:36.952 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-10T11:06:36.954 INFO:teuthology.orchestra.run.vm03.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-10T11:06:36.954 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 178 MB of archives.
2026-03-10T11:06:36.954 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-10T11:06:36.954 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-10T11:06:37.121 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-10T11:06:37.121 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-10T11:06:37.126 INFO:teuthology.orchestra.run.vm00.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-10T11:06:37.126 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-10T11:06:37.160 INFO:teuthology.orchestra.run.vm00.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-10T11:06:37.161 INFO:teuthology.orchestra.run.vm03.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-10T11:06:37.262 INFO:teuthology.orchestra.run.vm00.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-10T11:06:37.262 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-10T11:06:37.266 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-10T11:06:37.267 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-10T11:06:37.280 INFO:teuthology.orchestra.run.vm03.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-10T11:06:37.280 INFO:teuthology.orchestra.run.vm00.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-10T11:06:37.284 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-10T11:06:37.285 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-10T11:06:37.285 INFO:teuthology.orchestra.run.vm00.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-10T11:06:37.285 INFO:teuthology.orchestra.run.vm00.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-10T11:06:37.286 INFO:teuthology.orchestra.run.vm03.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-10T11:06:37.286 INFO:teuthology.orchestra.run.vm03.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-10T11:06:37.286 INFO:teuthology.orchestra.run.vm00.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-10T11:06:37.287 INFO:teuthology.orchestra.run.vm03.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-10T11:06:37.297 INFO:teuthology.orchestra.run.vm00.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-10T11:06:37.298 INFO:teuthology.orchestra.run.vm03.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-10T11:06:37.300 INFO:teuthology.orchestra.run.vm00.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-10T11:06:37.300 INFO:teuthology.orchestra.run.vm03.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-10T11:06:37.303 INFO:teuthology.orchestra.run.vm03.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-10T11:06:37.303 INFO:teuthology.orchestra.run.vm00.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-10T11:06:37.328 INFO:teuthology.orchestra.run.vm03.stdout:Get:15 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-10T11:06:37.337 INFO:teuthology.orchestra.run.vm03.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-10T11:06:37.337 INFO:teuthology.orchestra.run.vm03.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-10T11:06:37.338 INFO:teuthology.orchestra.run.vm03.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-10T11:06:37.339 INFO:teuthology.orchestra.run.vm00.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-10T11:06:37.339 INFO:teuthology.orchestra.run.vm00.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-10T11:06:37.341 INFO:teuthology.orchestra.run.vm03.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-10T11:06:37.342 INFO:teuthology.orchestra.run.vm00.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-10T11:06:37.344 INFO:teuthology.orchestra.run.vm03.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-10T11:06:37.344 INFO:teuthology.orchestra.run.vm03.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-10T11:06:37.344 INFO:teuthology.orchestra.run.vm03.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-10T11:06:37.345 INFO:teuthology.orchestra.run.vm00.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-10T11:06:37.345 INFO:teuthology.orchestra.run.vm03.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-10T11:06:37.346 INFO:teuthology.orchestra.run.vm00.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-10T11:06:37.347 INFO:teuthology.orchestra.run.vm00.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-10T11:06:37.347 INFO:teuthology.orchestra.run.vm00.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-10T11:06:37.347 INFO:teuthology.orchestra.run.vm00.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-10T11:06:37.375 INFO:teuthology.orchestra.run.vm03.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-10T11:06:37.375 INFO:teuthology.orchestra.run.vm03.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-10T11:06:37.375 INFO:teuthology.orchestra.run.vm03.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-10T11:06:37.375 INFO:teuthology.orchestra.run.vm00.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-10T11:06:37.375 INFO:teuthology.orchestra.run.vm03.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-10T11:06:37.376 INFO:teuthology.orchestra.run.vm00.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-10T11:06:37.376 INFO:teuthology.orchestra.run.vm00.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-10T11:06:37.395 INFO:teuthology.orchestra.run.vm00.stdout:Get:26 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB]
2026-03-10T11:06:37.411 INFO:teuthology.orchestra.run.vm03.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-10T11:06:37.411 INFO:teuthology.orchestra.run.vm03.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-10T11:06:37.411 INFO:teuthology.orchestra.run.vm00.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-10T11:06:37.412 INFO:teuthology.orchestra.run.vm00.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-10T11:06:37.412 INFO:teuthology.orchestra.run.vm00.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-10T11:06:37.413 INFO:teuthology.orchestra.run.vm03.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-10T11:06:37.413 INFO:teuthology.orchestra.run.vm03.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-10T11:06:37.414 INFO:teuthology.orchestra.run.vm03.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-10T11:06:37.415 INFO:teuthology.orchestra.run.vm00.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-10T11:06:37.415 INFO:teuthology.orchestra.run.vm03.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-10T11:06:37.415 INFO:teuthology.orchestra.run.vm00.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-10T11:06:37.416 INFO:teuthology.orchestra.run.vm00.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-10T11:06:37.416 INFO:teuthology.orchestra.run.vm00.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-10T11:06:37.446 INFO:teuthology.orchestra.run.vm03.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-10T11:06:37.446 INFO:teuthology.orchestra.run.vm03.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-10T11:06:37.447 INFO:teuthology.orchestra.run.vm03.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-10T11:06:37.447 INFO:teuthology.orchestra.run.vm03.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-10T11:06:37.447 INFO:teuthology.orchestra.run.vm00.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-10T11:06:37.448 INFO:teuthology.orchestra.run.vm00.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-10T11:06:37.448 INFO:teuthology.orchestra.run.vm00.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-10T11:06:37.482 INFO:teuthology.orchestra.run.vm03.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-10T11:06:37.484 INFO:teuthology.orchestra.run.vm00.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-10T11:06:37.484 INFO:teuthology.orchestra.run.vm00.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-10T11:06:37.487 INFO:teuthology.orchestra.run.vm03.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-10T11:06:37.487 INFO:teuthology.orchestra.run.vm03.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-10T11:06:37.487 INFO:teuthology.orchestra.run.vm03.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-10T11:06:37.488 INFO:teuthology.orchestra.run.vm03.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-10T11:06:37.489 INFO:teuthology.orchestra.run.vm03.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-10T11:06:37.492 INFO:teuthology.orchestra.run.vm00.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-10T11:06:37.492 INFO:teuthology.orchestra.run.vm00.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-10T11:06:37.543 INFO:teuthology.orchestra.run.vm00.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-10T11:06:37.543 INFO:teuthology.orchestra.run.vm00.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-10T11:06:37.544 INFO:teuthology.orchestra.run.vm00.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-10T11:06:37.545 INFO:teuthology.orchestra.run.vm00.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-10T11:06:37.545 INFO:teuthology.orchestra.run.vm00.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-10T11:06:37.546 INFO:teuthology.orchestra.run.vm00.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-10T11:06:37.553 INFO:teuthology.orchestra.run.vm03.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-10T11:06:37.553 INFO:teuthology.orchestra.run.vm03.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-10T11:06:37.554 INFO:teuthology.orchestra.run.vm03.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-10T11:06:37.554 INFO:teuthology.orchestra.run.vm03.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-10T11:06:37.555 INFO:teuthology.orchestra.run.vm03.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-10T11:06:37.558 INFO:teuthology.orchestra.run.vm00.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-10T11:06:37.558 INFO:teuthology.orchestra.run.vm00.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-10T11:06:37.645 INFO:teuthology.orchestra.run.vm03.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-10T11:06:37.646 INFO:teuthology.orchestra.run.vm03.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-10T11:06:37.646 INFO:teuthology.orchestra.run.vm03.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-10T11:06:37.651 INFO:teuthology.orchestra.run.vm00.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-10T11:06:37.652 INFO:teuthology.orchestra.run.vm00.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-10T11:06:37.652 INFO:teuthology.orchestra.run.vm00.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-10T11:06:37.661 INFO:teuthology.orchestra.run.vm00.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-10T11:06:37.661 INFO:teuthology.orchestra.run.vm00.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-10T11:06:37.661 INFO:teuthology.orchestra.run.vm00.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-10T11:06:37.662 INFO:teuthology.orchestra.run.vm00.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-10T11:06:37.662 INFO:teuthology.orchestra.run.vm00.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-10T11:06:37.662 INFO:teuthology.orchestra.run.vm00.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-10T11:06:37.679 INFO:teuthology.orchestra.run.vm03.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-10T11:06:37.679 INFO:teuthology.orchestra.run.vm03.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-10T11:06:37.679 INFO:teuthology.orchestra.run.vm03.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-10T11:06:37.679 INFO:teuthology.orchestra.run.vm03.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-10T11:06:37.679 INFO:teuthology.orchestra.run.vm03.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-10T11:06:37.679 INFO:teuthology.orchestra.run.vm03.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-10T11:06:37.682 INFO:teuthology.orchestra.run.vm03.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-10T11:06:37.687 INFO:teuthology.orchestra.run.vm00.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-10T11:06:37.699 INFO:teuthology.orchestra.run.vm00.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-10T11:06:37.700 INFO:teuthology.orchestra.run.vm00.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-10T11:06:37.701 INFO:teuthology.orchestra.run.vm00.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-10T11:06:37.704 INFO:teuthology.orchestra.run.vm00.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-10T11:06:37.706 INFO:teuthology.orchestra.run.vm00.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-10T11:06:37.706 INFO:teuthology.orchestra.run.vm00.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-10T11:06:37.707 INFO:teuthology.orchestra.run.vm00.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-10T11:06:37.718 INFO:teuthology.orchestra.run.vm03.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-10T11:06:37.718 INFO:teuthology.orchestra.run.vm03.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-10T11:06:37.719 INFO:teuthology.orchestra.run.vm03.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-10T11:06:37.722 INFO:teuthology.orchestra.run.vm03.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-10T11:06:37.724 INFO:teuthology.orchestra.run.vm00.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-10T11:06:37.725 INFO:teuthology.orchestra.run.vm03.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-10T11:06:37.725 INFO:teuthology.orchestra.run.vm00.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-10T11:06:37.726 INFO:teuthology.orchestra.run.vm03.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-10T11:06:37.727 INFO:teuthology.orchestra.run.vm03.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-10T11:06:37.781 INFO:teuthology.orchestra.run.vm00.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-10T11:06:37.781 INFO:teuthology.orchestra.run.vm00.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-10T11:06:37.781 INFO:teuthology.orchestra.run.vm00.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-10T11:06:37.781 INFO:teuthology.orchestra.run.vm00.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-10T11:06:37.782 INFO:teuthology.orchestra.run.vm00.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-10T11:06:37.782 INFO:teuthology.orchestra.run.vm00.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-10T11:06:37.786 INFO:teuthology.orchestra.run.vm00.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB]
2026-03-10T11:06:37.786 INFO:teuthology.orchestra.run.vm00.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-10T11:06:37.793 INFO:teuthology.orchestra.run.vm03.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-10T11:06:37.814 INFO:teuthology.orchestra.run.vm00.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-10T11:06:37.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-10T11:06:37.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-10T11:06:37.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-10T11:06:37.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-10T11:06:37.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-10T11:06:37.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-10T11:06:37.815 INFO:teuthology.orchestra.run.vm03.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-10T11:06:37.833 INFO:teuthology.orchestra.run.vm00.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-10T11:06:37.834 INFO:teuthology.orchestra.run.vm00.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-10T11:06:37.835 INFO:teuthology.orchestra.run.vm03.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB]
2026-03-10T11:06:37.835 INFO:teuthology.orchestra.run.vm03.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-10T11:06:37.864 INFO:teuthology.orchestra.run.vm03.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-10T11:06:37.868 INFO:teuthology.orchestra.run.vm03.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-10T11:06:37.868 INFO:teuthology.orchestra.run.vm03.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-10T11:06:37.883 INFO:teuthology.orchestra.run.vm03.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-10T11:06:37.903 INFO:teuthology.orchestra.run.vm00.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-10T11:06:38.213 INFO:teuthology.orchestra.run.vm03.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB]
2026-03-10T11:06:38.687 INFO:teuthology.orchestra.run.vm03.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB]
2026-03-10T11:06:38.809 INFO:teuthology.orchestra.run.vm03.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB]
2026-03-10T11:06:38.903 INFO:teuthology.orchestra.run.vm03.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB]
2026-03-10T11:06:38.903 INFO:teuthology.orchestra.run.vm03.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB]
2026-03-10T11:06:38.922 INFO:teuthology.orchestra.run.vm03.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB]
2026-03-10T11:06:38.922 INFO:teuthology.orchestra.run.vm03.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB]
2026-03-10T11:06:38.927 INFO:teuthology.orchestra.run.vm03.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB]
2026-03-10T11:06:39.018 INFO:teuthology.orchestra.run.vm00.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB]
2026-03-10T11:06:39.616 INFO:teuthology.orchestra.run.vm03.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB]
2026-03-10T11:06:39.652 INFO:teuthology.orchestra.run.vm03.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB]
2026-03-10T11:06:39.688 INFO:teuthology.orchestra.run.vm03.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB]
2026-03-10T11:06:42.202 INFO:teuthology.orchestra.run.vm03.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB]
2026-03-10T11:06:42.517 INFO:teuthology.orchestra.run.vm03.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB]
2026-03-10T11:06:42.545 INFO:teuthology.orchestra.run.vm03.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB]
2026-03-10T11:06:42.546 INFO:teuthology.orchestra.run.vm03.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB]
2026-03-10T11:06:42.603 INFO:teuthology.orchestra.run.vm03.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB]
2026-03-10T11:06:42.983 INFO:teuthology.orchestra.run.vm03.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB]
2026-03-10T11:06:44.779 INFO:teuthology.orchestra.run.vm03.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB]
2026-03-10T11:06:44.779 INFO:teuthology.orchestra.run.vm03.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB]
2026-03-10T11:06:44.919 INFO:teuthology.orchestra.run.vm00.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB]
2026-03-10T11:06:45.023 INFO:teuthology.orchestra.run.vm03.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB]
2026-03-10T11:06:45.146 INFO:teuthology.orchestra.run.vm03.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB]
2026-03-10T11:06:45.155 INFO:teuthology.orchestra.run.vm03.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB]
2026-03-10T11:06:45.158 INFO:teuthology.orchestra.run.vm03.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB]
2026-03-10T11:06:45.166 INFO:teuthology.orchestra.run.vm00.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB]
2026-03-10T11:06:45.201 INFO:teuthology.orchestra.run.vm00.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB]
2026-03-10T11:06:45.201 INFO:teuthology.orchestra.run.vm00.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB]
2026-03-10T11:06:45.201 INFO:teuthology.orchestra.run.vm00.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB]
2026-03-10T11:06:45.201 INFO:teuthology.orchestra.run.vm00.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB]
2026-03-10T11:06:45.201 INFO:teuthology.orchestra.run.vm00.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB]
2026-03-10T11:06:45.286 INFO:teuthology.orchestra.run.vm03.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB]
2026-03-10T11:06:46.005 INFO:teuthology.orchestra.run.vm03.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB]
2026-03-10T11:06:46.005 INFO:teuthology.orchestra.run.vm03.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB]
2026-03-10T11:06:49.337 INFO:teuthology.orchestra.run.vm03.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-10T11:06:49.338 INFO:teuthology.orchestra.run.vm03.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-10T11:06:49.340 INFO:teuthology.orchestra.run.vm03.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-10T11:06:50.195 INFO:teuthology.orchestra.run.vm03.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-10T11:06:50.522 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 178 MB in 13s (13.2 MB/s)
2026-03-10T11:06:50.642 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-10T11:06:50.674 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 111717 files and directories currently installed.)
2026-03-10T11:06:50.676 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-10T11:06:50.678 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T11:06:50.698 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-10T11:06:50.704 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-10T11:06:50.705 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T11:06:50.722 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-10T11:06:50.727 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-10T11:06:50.728 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T11:06:50.750 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-10T11:06:50.755 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T11:06:50.759 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:06:50.804 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-10T11:06:50.810 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T11:06:50.810 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:06:50.833 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5network5:amd64. 
2026-03-10T11:06:50.838 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T11:06:50.839 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:06:50.866 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-10T11:06:50.871 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-10T11:06:50.872 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T11:06:50.896 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:50.898 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T11:06:50.975 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:50.977 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T11:06:51.060 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libnbd0. 2026-03-10T11:06:51.066 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-10T11:06:51.067 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-10T11:06:51.087 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs2. 2026-03-10T11:06:51.093 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:51.094 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:06:51.123 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rados. 2026-03-10T11:06:51.128 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:51.131 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:51.150 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-10T11:06:51.155 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:51.156 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:51.173 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cephfs. 2026-03-10T11:06:51.177 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:51.178 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:51.195 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-10T11:06:51.201 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:51.202 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:51.224 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-wcwidth. 
2026-03-10T11:06:51.226 INFO:teuthology.orchestra.run.vm00.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-10T11:06:51.226 INFO:teuthology.orchestra.run.vm00.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-10T11:06:51.230 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-10T11:06:51.231 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T11:06:51.234 INFO:teuthology.orchestra.run.vm00.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-10T11:06:51.252 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-prettytable. 2026-03-10T11:06:51.258 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-10T11:06:51.259 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-10T11:06:51.278 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rbd. 2026-03-10T11:06:51.284 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:51.285 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:51.308 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librdkafka1:amd64. 
2026-03-10T11:06:51.313 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-10T11:06:51.314 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T11:06:51.336 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-10T11:06:51.341 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-10T11:06:51.342 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T11:06:51.367 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-10T11:06:51.374 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-10T11:06:51.375 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T11:06:51.398 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua5.1. 2026-03-10T11:06:51.404 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-10T11:06:51.405 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-10T11:06:51.425 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-any. 2026-03-10T11:06:51.431 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-10T11:06:51.432 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-10T11:06:51.444 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package zip. 2026-03-10T11:06:51.451 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 
2026-03-10T11:06:51.456 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking zip (3.0-12build2) ... 2026-03-10T11:06:51.475 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package unzip. 2026-03-10T11:06:51.481 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-10T11:06:51.482 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-10T11:06:51.502 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package luarocks. 2026-03-10T11:06:51.508 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-10T11:06:51.509 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-10T11:06:51.560 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librgw2. 2026-03-10T11:06:51.566 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:51.577 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:51.704 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rgw. 2026-03-10T11:06:51.710 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:51.710 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:51.728 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-10T11:06:51.733 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-10T11:06:51.734 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 
2026-03-10T11:06:51.751 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libradosstriper1. 2026-03-10T11:06:51.757 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:51.757 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:51.785 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-common. 2026-03-10T11:06:51.786 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:51.789 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:52.288 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-base. 2026-03-10T11:06:52.296 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:52.300 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:52.362 INFO:teuthology.orchestra.run.vm00.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-10T11:06:52.470 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-10T11:06:52.477 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-10T11:06:52.477 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-10T11:06:52.494 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cheroot. 
2026-03-10T11:06:52.500 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-10T11:06:52.500 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T11:06:52.519 INFO:teuthology.orchestra.run.vm00.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-10T11:06:52.520 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-10T11:06:52.526 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-10T11:06:52.527 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-10T11:06:52.544 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-10T11:06:52.549 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-10T11:06:52.550 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-10T11:06:52.566 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-10T11:06:52.572 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-10T11:06:52.573 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-10T11:06:52.590 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempora. 
2026-03-10T11:06:52.596 INFO:teuthology.orchestra.run.vm00.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-10T11:06:52.596 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-10T11:06:52.597 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-10T11:06:52.600 INFO:teuthology.orchestra.run.vm00.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-10T11:06:52.615 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-portend. 2026-03-10T11:06:52.621 INFO:teuthology.orchestra.run.vm00.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-10T11:06:52.621 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-10T11:06:52.622 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-10T11:06:52.639 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-10T11:06:52.646 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-10T11:06:52.646 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-10T11:06:52.665 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-10T11:06:52.672 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 
2026-03-10T11:06:52.676 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-10T11:06:52.725 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-natsort. 2026-03-10T11:06:52.732 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-10T11:06:52.732 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-10T11:06:52.754 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-logutils. 2026-03-10T11:06:52.761 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-10T11:06:52.762 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-10T11:06:52.781 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-mako. 2026-03-10T11:06:52.787 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-10T11:06:52.788 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T11:06:52.808 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-10T11:06:52.814 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-10T11:06:52.815 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-10T11:06:52.830 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T11:06:52.836 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T11:06:52.837 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 
2026-03-10T11:06:52.854 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webob. 2026-03-10T11:06:52.860 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T11:06:52.861 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T11:06:52.876 INFO:teuthology.orchestra.run.vm00.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-10T11:06:52.881 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T11:06:52.887 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-10T11:06:52.889 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T11:06:52.907 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T11:06:52.913 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T11:06:52.914 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T11:06:52.928 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-paste. 2026-03-10T11:06:52.934 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T11:06:52.935 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T11:06:52.967 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-pastedeploy-tpl. 
2026-03-10T11:06:52.972 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T11:06:52.973 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T11:06:52.989 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T11:06:52.995 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-10T11:06:52.996 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-10T11:06:53.015 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webtest. 2026-03-10T11:06:53.021 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-10T11:06:53.022 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-10T11:06:53.050 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pecan. 2026-03-10T11:06:53.055 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-10T11:06:53.056 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T11:06:53.601 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-10T11:06:53.607 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-10T11:06:53.608 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T11:06:53.633 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-modules-core. 
2026-03-10T11:06:53.638 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:53.640 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:53.684 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-10T11:06:53.689 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:53.690 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:53.714 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr. 2026-03-10T11:06:53.721 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:53.721 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:53.768 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mon. 2026-03-10T11:06:53.774 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:53.775 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:06:53.782 INFO:teuthology.orchestra.run.vm00.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-10T11:06:53.782 INFO:teuthology.orchestra.run.vm00.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-10T11:06:53.844 INFO:teuthology.orchestra.run.vm00.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-10T11:06:53.902 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-10T11:06:53.908 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-10T11:06:53.908 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T11:06:53.924 INFO:teuthology.orchestra.run.vm00.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-10T11:06:53.927 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-osd. 2026-03-10T11:06:53.933 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:53.934 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:06:53.970 INFO:teuthology.orchestra.run.vm00.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-10T11:06:53.976 INFO:teuthology.orchestra.run.vm00.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-10T11:06:54.054 INFO:teuthology.orchestra.run.vm00.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-10T11:06:54.312 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph. 2026-03-10T11:06:54.319 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:54.319 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:54.370 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-fuse. 2026-03-10T11:06:54.375 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:54.376 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:54.411 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mds. 2026-03-10T11:06:54.417 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:54.417 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:06:54.478 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package cephadm. 2026-03-10T11:06:54.486 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:54.487 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:54.514 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-10T11:06:54.520 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T11:06:54.521 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T11:06:54.552 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-10T11:06:54.558 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:54.558 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:54.586 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-10T11:06:54.593 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-10T11:06:54.594 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-repoze.lru (0.7-2) ... 
2026-03-10T11:06:54.601 INFO:teuthology.orchestra.run.vm00.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-10T11:06:54.601 INFO:teuthology.orchestra.run.vm00.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-10T11:06:54.615 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-routes. 2026-03-10T11:06:54.621 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-10T11:06:54.622 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T11:06:54.648 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-10T11:06:54.654 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:54.654 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:55.025 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-10T11:06:55.031 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-10T11:06:55.031 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T11:06:55.094 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-joblib. 2026-03-10T11:06:55.100 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 
2026-03-10T11:06:55.100 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T11:06:55.140 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-10T11:06:55.145 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-10T11:06:55.146 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-10T11:06:55.164 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn. 2026-03-10T11:06:55.169 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-10T11:06:55.170 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T11:06:55.305 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-10T11:06:55.311 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:55.312 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:55.613 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cachetools. 2026-03-10T11:06:55.618 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-10T11:06:55.619 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-10T11:06:55.636 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rsa. 2026-03-10T11:06:55.644 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 
2026-03-10T11:06:55.645 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-10T11:06:55.669 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-google-auth. 2026-03-10T11:06:55.675 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-10T11:06:55.676 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-10T11:06:55.697 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-10T11:06:55.703 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-10T11:06:55.704 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T11:06:55.721 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-websocket. 2026-03-10T11:06:55.726 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-10T11:06:55.727 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-10T11:06:55.750 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-10T11:06:55.756 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-10T11:06:55.769 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T11:06:55.936 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-10T11:06:55.943 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-10T11:06:55.944 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:55.961 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-10T11:06:55.967 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-10T11:06:55.968 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T11:06:55.986 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-10T11:06:55.991 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T11:06:55.992 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T11:06:56.007 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package jq. 2026-03-10T11:06:56.012 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T11:06:56.013 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-10T11:06:56.026 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package socat. 2026-03-10T11:06:56.031 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-10T11:06:56.032 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-10T11:06:56.055 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package xmlstarlet. 2026-03-10T11:06:56.061 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-10T11:06:56.062 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 
2026-03-10T11:06:56.356 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-test. 2026-03-10T11:06:56.362 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:56.363 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:57.238 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-volume. 2026-03-10T11:06:57.243 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:57.244 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:57.272 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-10T11:06:57.278 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:57.279 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:57.297 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-10T11:06:57.304 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-10T11:06:57.305 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T11:06:57.331 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-10T11:06:57.339 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-10T11:06:57.340 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 
2026-03-10T11:06:57.377 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package nvme-cli. 2026-03-10T11:06:57.383 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-10T11:06:57.384 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T11:06:57.419 INFO:teuthology.orchestra.run.vm00.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-10T11:06:57.419 INFO:teuthology.orchestra.run.vm00.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-10T11:06:57.420 INFO:teuthology.orchestra.run.vm00.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-10T11:06:57.428 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package pkg-config. 2026-03-10T11:06:57.434 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-10T11:06:57.435 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T11:06:57.452 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-10T11:06:57.457 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T11:06:57.459 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T11:06:57.507 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-iniconfig. 
2026-03-10T11:06:57.513 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-10T11:06:57.514 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-10T11:06:57.530 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastescript. 2026-03-10T11:06:57.537 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-10T11:06:57.538 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-10T11:06:57.563 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pluggy. 2026-03-10T11:06:57.570 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-10T11:06:57.571 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-10T11:06:57.590 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-psutil. 2026-03-10T11:06:57.596 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-10T11:06:57.597 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-10T11:06:57.621 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-py. 2026-03-10T11:06:57.627 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-10T11:06:57.628 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-10T11:06:57.654 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pygments. 2026-03-10T11:06:57.660 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 
2026-03-10T11:06:57.661 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T11:06:57.725 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-10T11:06:57.731 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-10T11:06:57.732 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-10T11:06:57.750 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-toml. 2026-03-10T11:06:57.756 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-10T11:06:57.757 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-10T11:06:57.776 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pytest. 2026-03-10T11:06:57.782 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-10T11:06:57.783 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T11:06:57.812 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplejson. 2026-03-10T11:06:57.818 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-10T11:06:57.819 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-10T11:06:57.838 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-10T11:06:57.844 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-10T11:06:57.845 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 
2026-03-10T11:06:57.959 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package radosgw. 2026-03-10T11:06:57.965 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:57.966 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:58.148 INFO:teuthology.orchestra.run.vm00.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-10T11:06:58.211 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package rbd-fuse. 2026-03-10T11:06:58.217 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:58.218 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:58.239 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package smartmontools. 2026-03-10T11:06:58.246 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-10T11:06:58.255 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T11:06:58.318 INFO:teuthology.orchestra.run.vm03.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T11:06:58.467 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 178 MB in 21s (8322 kB/s) 2026-03-10T11:06:58.555 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-10T11:06:58.556 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 
2026-03-10T11:06:58.658 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-10T11:06:58.700 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.) 2026-03-10T11:06:58.703 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-10T11:06:58.790 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T11:06:58.873 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-10T11:06:58.880 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-10T11:06:58.880 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T11:06:58.898 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-10T11:06:58.903 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-10T11:06:58.904 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T11:06:58.925 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5core5a:amd64. 
2026-03-10T11:06:58.930 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T11:06:58.934 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-10T11:06:58.935 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:06:59.149 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T11:06:59.161 INFO:teuthology.orchestra.run.vm03.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T11:06:59.169 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-10T11:06:59.175 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T11:06:59.175 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:06:59.195 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-10T11:06:59.200 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T11:06:59.201 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:06:59.226 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T11:06:59.232 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-10T11:06:59.238 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-10T11:06:59.238 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 
2026-03-10T11:06:59.264 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:59.267 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T11:06:59.358 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:59.363 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T11:06:59.442 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libnbd0. 2026-03-10T11:06:59.446 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-10T11:06:59.447 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-10T11:06:59.463 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs2. 2026-03-10T11:06:59.465 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-10T11:06:59.471 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:59.472 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:59.501 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rados. 2026-03-10T11:06:59.506 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:59.507 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:06:59.528 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-10T11:06:59.535 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:59.536 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:59.552 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cephfs. 2026-03-10T11:06:59.558 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:59.558 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:59.577 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-10T11:06:59.583 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:06:59.584 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:59.610 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-10T11:06:59.616 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-10T11:06:59.616 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T11:06:59.633 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-prettytable. 2026-03-10T11:06:59.639 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-10T11:06:59.639 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-prettytable (2.5.0-2) ... 
2026-03-10T11:06:59.656 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rbd. 2026-03-10T11:06:59.662 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:06:59.663 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:59.684 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-10T11:06:59.690 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-10T11:06:59.691 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T11:06:59.715 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-10T11:06:59.721 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-10T11:06:59.722 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T11:06:59.742 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-10T11:06:59.747 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-10T11:06:59.748 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T11:06:59.772 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua5.1. 2026-03-10T11:06:59.777 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-10T11:06:59.778 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-10T11:06:59.798 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-any. 
2026-03-10T11:06:59.805 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-10T11:06:59.806 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-10T11:06:59.820 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package zip. 2026-03-10T11:06:59.826 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-10T11:06:59.826 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking zip (3.0-12build2) ... 2026-03-10T11:06:59.845 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package unzip. 2026-03-10T11:06:59.851 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-10T11:06:59.851 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-10T11:06:59.871 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package luarocks. 2026-03-10T11:06:59.877 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-10T11:06:59.877 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-10T11:06:59.914 INFO:teuthology.orchestra.run.vm03.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-10T11:06:59.921 INFO:teuthology.orchestra.run.vm03.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-10T11:06:59.932 INFO:teuthology.orchestra.run.vm03.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:59.941 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librgw2. 2026-03-10T11:06:59.948 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T11:06:59.949 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:06:59.977 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user cephadm....done 2026-03-10T11:06:59.987 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T11:07:00.073 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-10T11:07:00.090 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rgw. 2026-03-10T11:07:00.094 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:00.095 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:00.114 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-10T11:07:00.120 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-10T11:07:00.121 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T11:07:00.139 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libradosstriper1. 2026-03-10T11:07:00.143 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T11:07:00.145 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-10T11:07:00.147 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:00.148 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:00.259 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-repoze.lru (0.7-2) ... 
2026-03-10T11:07:00.262 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-common. 2026-03-10T11:07:00.268 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:00.269 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:00.337 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T11:07:00.339 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-10T11:07:00.445 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T11:07:00.701 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-10T11:07:00.727 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-base. 2026-03-10T11:07:00.735 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:00.741 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:00.784 INFO:teuthology.orchestra.run.vm03.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-10T11:07:00.793 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-10T11:07:00.855 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-10T11:07:00.861 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-10T11:07:00.862 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-10T11:07:00.866 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 
2026-03-10T11:07:00.880 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cheroot. 2026-03-10T11:07:00.886 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-10T11:07:00.887 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T11:07:00.928 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-10T11:07:00.934 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-10T11:07:00.935 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-10T11:07:00.940 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:00.954 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-10T11:07:00.960 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-10T11:07:00.960 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-10T11:07:00.986 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-10T11:07:00.992 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-10T11:07:00.994 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-10T11:07:01.012 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempora. 2026-03-10T11:07:01.015 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 
2026-03-10T11:07:01.017 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-10T11:07:01.018 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-10T11:07:01.020 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-10T11:07:01.020 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T11:07:01.022 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T11:07:01.025 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T11:07:01.027 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-10T11:07:01.032 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-10T11:07:01.034 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-10T11:07:01.037 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T11:07:01.038 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-portend. 2026-03-10T11:07:01.039 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-10T11:07:01.044 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-10T11:07:01.045 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-10T11:07:01.063 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-10T11:07:01.069 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 
2026-03-10T11:07:01.071 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-10T11:07:01.089 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-10T11:07:01.096 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-10T11:07:01.097 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-10T11:07:01.130 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-natsort. 2026-03-10T11:07:01.136 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-10T11:07:01.137 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-10T11:07:01.158 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-logutils. 2026-03-10T11:07:01.163 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-10T11:07:01.164 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-10T11:07:01.165 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-10T11:07:01.182 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-mako. 2026-03-10T11:07:01.188 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-10T11:07:01.189 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T11:07:01.212 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-10T11:07:01.218 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 
2026-03-10T11:07:01.219 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-10T11:07:01.237 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T11:07:01.240 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T11:07:01.243 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T11:07:01.244 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-10T11:07:01.260 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webob. 2026-03-10T11:07:01.265 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T11:07:01.266 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T11:07:01.288 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T11:07:01.293 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-10T11:07:01.296 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T11:07:01.313 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T11:07:01.319 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T11:07:01.319 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-10T11:07:01.319 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T11:07:01.336 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-paste. 
2026-03-10T11:07:01.341 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T11:07:01.342 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T11:07:01.379 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-10T11:07:01.385 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T11:07:01.386 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T11:07:01.403 INFO:teuthology.orchestra.run.vm03.stdout:Setting up zip (3.0-12build2) ... 2026-03-10T11:07:01.405 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T11:07:01.406 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T11:07:01.411 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-10T11:07:01.412 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-10T11:07:01.431 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webtest. 2026-03-10T11:07:01.437 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-10T11:07:01.438 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-10T11:07:01.458 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pecan. 2026-03-10T11:07:01.463 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-10T11:07:01.465 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 
2026-03-10T11:07:01.500 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-10T11:07:01.506 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-10T11:07:01.507 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T11:07:01.536 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-10T11:07:01.542 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:07:01.543 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:01.582 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-10T11:07:01.587 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:01.589 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:01.606 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr. 2026-03-10T11:07:01.612 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:01.614 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:01.649 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mon. 2026-03-10T11:07:01.655 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:01.656 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:07:01.737 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T11:07:01.758 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-10T11:07:01.764 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-10T11:07:01.766 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T11:07:01.789 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-osd. 2026-03-10T11:07:01.795 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:01.796 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:01.812 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T11:07:01.815 INFO:teuthology.orchestra.run.vm03.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-10T11:07:01.817 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T11:07:01.913 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T11:07:02.129 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T11:07:02.147 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph. 2026-03-10T11:07:02.153 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:02.154 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:02.171 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-fuse. 
2026-03-10T11:07:02.177 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:02.178 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:02.238 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mds. 2026-03-10T11:07:02.244 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:02.245 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:02.268 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T11:07:02.301 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package cephadm. 2026-03-10T11:07:02.307 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:02.308 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:02.343 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-10T11:07:02.348 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T11:07:02.349 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T11:07:02.364 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T11:07:02.380 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-10T11:07:02.385 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-10T11:07:02.386 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:02.417 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-10T11:07:02.423 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-10T11:07:02.424 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-10T11:07:02.444 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-routes. 2026-03-10T11:07:02.449 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-10T11:07:02.450 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T11:07:02.477 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-10T11:07:02.484 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:07:02.485 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:02.491 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-10T11:07:02.568 INFO:teuthology.orchestra.run.vm03.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-10T11:07:02.570 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:02.667 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T11:07:02.873 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 
2026-03-10T11:07:02.879 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-10T11:07:02.880 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T11:07:02.945 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-joblib. 2026-03-10T11:07:02.951 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-10T11:07:02.952 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T11:07:03.018 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-10T11:07:03.021 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-10T11:07:03.022 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-10T11:07:03.039 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn. 2026-03-10T11:07:03.045 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-10T11:07:03.047 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T11:07:03.188 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-10T11:07:03.194 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:07:03.196 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:03.485 INFO:teuthology.orchestra.run.vm03.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 
2026-03-10T11:07:03.586 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cachetools. 2026-03-10T11:07:03.589 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-10T11:07:03.590 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-10T11:07:03.592 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:07:03.597 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-10T11:07:03.606 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rsa. 2026-03-10T11:07:03.611 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-10T11:07:03.612 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-10T11:07:03.634 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-google-auth. 2026-03-10T11:07:03.637 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-10T11:07:03.638 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-10T11:07:03.658 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-10T11:07:03.664 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-10T11:07:03.665 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T11:07:03.673 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T11:07:03.676 INFO:teuthology.orchestra.run.vm03.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 
2026-03-10T11:07:03.680 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-10T11:07:03.686 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-websocket. 2026-03-10T11:07:03.688 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-10T11:07:03.689 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-10T11:07:03.714 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-10T11:07:03.718 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-10T11:07:03.733 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T11:07:03.754 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-10T11:07:03.828 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:07:03.851 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-10T11:07:03.978 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-10T11:07:03.988 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-10T11:07:03.994 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T11:07:03.996 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:04.016 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-10T11:07:04.023 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 
2026-03-10T11:07:04.025 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T11:07:04.049 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-10T11:07:04.050 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-10T11:07:04.056 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T11:07:04.058 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T11:07:04.079 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package jq. 2026-03-10T11:07:04.084 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T11:07:04.086 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-10T11:07:04.104 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package socat. 2026-03-10T11:07:04.110 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-10T11:07:04.112 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-10T11:07:04.129 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-10T11:07:04.140 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package xmlstarlet. 2026-03-10T11:07:04.147 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-10T11:07:04.148 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-10T11:07:04.196 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-test. 2026-03-10T11:07:04.203 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 
2026-03-10T11:07:04.203 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:04.204 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:04.279 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-10T11:07:04.370 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T11:07:04.372 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-10T11:07:04.458 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T11:07:04.461 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T11:07:04.537 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T11:07:04.635 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T11:07:04.749 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-10T11:07:04.930 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T11:07:05.189 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-10T11:07:05.222 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T11:07:05.225 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T11:07:05.240 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-volume. 2026-03-10T11:07:05.246 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-10T11:07:05.247 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:05.276 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-10T11:07:05.282 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:05.283 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:05.299 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-10T11:07:05.306 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-10T11:07:05.306 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T11:07:05.332 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-10T11:07:05.339 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-10T11:07:05.340 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-10T11:07:05.363 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package nvme-cli. 2026-03-10T11:07:05.369 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-10T11:07:05.370 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T11:07:05.376 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-10T11:07:05.414 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package pkg-config. 2026-03-10T11:07:05.418 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 
2026-03-10T11:07:05.419 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T11:07:05.436 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-10T11:07:05.443 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T11:07:05.444 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T11:07:05.453 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-10T11:07:05.455 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-10T11:07:05.493 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-10T11:07:05.500 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-10T11:07:05.502 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-10T11:07:05.540 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:07:05.542 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-10T11:07:05.549 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastescript. 2026-03-10T11:07:05.555 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-10T11:07:05.556 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-10T11:07:05.580 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pluggy. 2026-03-10T11:07:05.588 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 
2026-03-10T11:07:05.589 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-10T11:07:05.610 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-psutil. 2026-03-10T11:07:05.617 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-10T11:07:05.618 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-10T11:07:05.630 INFO:teuthology.orchestra.run.vm03.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-10T11:07:05.632 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-10T11:07:05.643 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-py. 2026-03-10T11:07:05.650 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-10T11:07:05.651 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-10T11:07:05.679 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pygments. 2026-03-10T11:07:05.685 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-10T11:07:05.687 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T11:07:05.714 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-10T11:07:05.766 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-10T11:07:05.771 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-10T11:07:05.772 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 
2026-03-10T11:07:05.792 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-toml. 2026-03-10T11:07:05.798 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-10T11:07:05.799 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-10T11:07:05.818 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pytest. 2026-03-10T11:07:05.825 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-10T11:07:05.826 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T11:07:05.856 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-10T11:07:05.856 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplejson. 2026-03-10T11:07:05.863 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-10T11:07:05.864 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-10T11:07:05.888 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-10T11:07:05.894 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-10T11:07:05.895 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-10T11:07:05.945 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T11:07:06.007 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package radosgw. 2026-03-10T11:07:06.013 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T11:07:06.014 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.069 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T11:07:06.071 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.074 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.076 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T11:07:06.199 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package rbd-fuse. 2026-03-10T11:07:06.205 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T11:07:06.206 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.237 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package smartmontools. 2026-03-10T11:07:06.243 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-10T11:07:06.251 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T11:07:06.302 INFO:teuthology.orchestra.run.vm00.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T11:07:06.549 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-10T11:07:06.550 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-10T11:07:06.697 INFO:teuthology.orchestra.run.vm03.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 
2026-03-10T11:07:06.704 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.706 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.708 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.710 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.713 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:06.772 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T11:07:06.773 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T11:07:06.940 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-10T11:07:07.016 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T11:07:07.019 INFO:teuthology.orchestra.run.vm00.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T11:07:07.087 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T11:07:07.139 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:07.171 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:07.173 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:07:07.175 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:07.178 INFO:teuthology.orchestra.run.vm03.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:07.180 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:07.183 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:07.186 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:07.228 INFO:teuthology.orchestra.run.vm03.stdout:Adding group ceph....done 2026-03-10T11:07:07.267 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user ceph....done 2026-03-10T11:07:07.279 INFO:teuthology.orchestra.run.vm03.stdout:Setting system user ceph properties....done 2026-03-10T11:07:07.283 INFO:teuthology.orchestra.run.vm03.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-10T11:07:07.347 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-10T11:07:07.359 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-10T11:07:07.566 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-10T11:07:07.738 INFO:teuthology.orchestra.run.vm00.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-10T11:07:07.745 INFO:teuthology.orchestra.run.vm00.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-10T11:07:07.747 INFO:teuthology.orchestra.run.vm00.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:07:07.795 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user cephadm....done 2026-03-10T11:07:07.803 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T11:07:07.895 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-10T11:07:07.967 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T11:07:07.971 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-10T11:07:07.973 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:07.976 INFO:teuthology.orchestra.run.vm03.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:08.045 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-10T11:07:08.127 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T11:07:08.130 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-10T11:07:08.220 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T11:07:08.220 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T11:07:08.220 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T11:07:08.384 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-10T11:07:08.463 INFO:teuthology.orchestra.run.vm00.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-10T11:07:08.471 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 
2026-03-10T11:07:08.550 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-10T11:07:08.590 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:08.619 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:08.681 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-10T11:07:08.691 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T11:07:08.694 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-10T11:07:08.697 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T11:07:08.700 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T11:07:08.704 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T11:07:08.707 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-10T11:07:08.714 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-10T11:07:08.716 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-10T11:07:08.719 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T11:07:08.722 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-10T11:07:08.847 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-natsort (8.0.2-1) ... 
2026-03-10T11:07:08.921 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T11:07:08.995 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-10T11:07:09.060 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:09.084 INFO:teuthology.orchestra.run.vm00.stdout:Setting up zip (3.0-12build2) ... 2026-03-10T11:07:09.087 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T11:07:09.125 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T11:07:09.125 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T11:07:09.557 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:09.582 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T11:07:09.645 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T11:07:09.645 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T11:07:09.668 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T11:07:09.704 INFO:teuthology.orchestra.run.vm00.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-10T11:07:09.707 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T11:07:09.799 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 
2026-03-10T11:07:09.934 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T11:07:10.033 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:10.068 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T11:07:10.117 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T11:07:10.117 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T11:07:10.156 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T11:07:10.272 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-10T11:07:10.339 INFO:teuthology.orchestra.run.vm00.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-10T11:07:10.341 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:10.432 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T11:07:10.432 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:10.435 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:10.450 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:10.515 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 
2026-03-10T11:07:10.517 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T11:07:10.886 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:10.899 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:10.902 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:10.915 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:11.013 INFO:teuthology.orchestra.run.vm00.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T11:07:11.035 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T11:07:11.036 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:07:11.043 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-10T11:07:11.043 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T11:07:11.059 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T11:07:11.124 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T11:07:11.127 INFO:teuthology.orchestra.run.vm00.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-10T11:07:11.130 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-10T11:07:11.145 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-10T11:07:11.205 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-zc.lockfile (2.0-1) ... 
2026-03-10T11:07:11.280 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T11:07:11.283 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-10T11:07:11.364 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-10T11:07:11.433 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-10T11:07:11.477 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:11.477 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 2026-03-10T11:07:11.477 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:11.477 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-10T11:07:11.484 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:11.487 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T11:07:11.506 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempora (4.1.2-1) ... 
2026-03-10T11:07:11.575 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-10T11:07:11.639 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-10T11:07:11.714 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T11:07:11.717 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-10T11:07:11.808 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T11:07:11.810 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T11:07:11.884 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T11:07:11.976 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T11:07:12.071 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-10T11:07:12.142 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T11:07:12.144 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-10T11:07:12.147 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T11:07:12.149 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T11:07:12.290 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-10T11:07:12.363 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-10T11:07:12.365 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-10T11:07:12.448 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-10T11:07:12.464 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-10T11:07:12.535 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:07:12.538 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-10T11:07:12.544 INFO:teuthology.orchestra.run.vm00.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-10T11:07:12.547 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-10T11:07:12.615 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:07:12.623 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-10T11:07:12.758 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-10T11:07:12.783 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:07:12.784 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:07:12.845 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T11:07:12.902 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:07:12.902 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T11:07:12.903 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T11:07:12.903 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-10T11:07:12.914 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-10T11:07:12.914 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath python3-xmltodict 2026-03-10T11:07:12.963 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T11:07:12.965 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:12.968 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:12.971 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T11:07:13.385 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:07:13.385 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 34.3 kB of archives. 2026-03-10T11:07:13.385 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-10T11:07:13.385 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-10T11:07:13.561 INFO:teuthology.orchestra.run.vm00.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-10T11:07:13.568 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.571 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.573 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.575 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.578 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:07:13.611 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-10T11:07:13.644 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T11:07:13.644 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T11:07:13.820 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 34.3 kB in 1s (48.7 kB/s) 2026-03-10T11:07:13.984 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.984 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jmespath. 2026-03-10T11:07:13.987 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.989 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.992 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.995 INFO:teuthology.orchestra.run.vm00.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:13.997 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:14.000 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:14.003 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:14.021 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 
35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-10T11:07:14.024 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T11:07:14.025 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T11:07:14.038 INFO:teuthology.orchestra.run.vm00.stdout:Adding group ceph....done 2026-03-10T11:07:14.045 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T11:07:14.051 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T11:07:14.052 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T11:07:14.079 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-10T11:07:14.080 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user ceph....done 2026-03-10T11:07:14.089 INFO:teuthology.orchestra.run.vm00.stdout:Setting system user ceph properties....done 2026-03-10T11:07:14.093 INFO:teuthology.orchestra.run.vm00.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-10T11:07:14.147 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T11:07:14.160 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 
2026-03-10T11:07:14.376 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-10T11:07:14.501 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:14.501 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 2026-03-10T11:07:14.501 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:14.501 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-10T11:07:14.507 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-10T11:07:14.509 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:07:14.510 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T11:07:14.808 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:14.810 INFO:teuthology.orchestra.run.vm00.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:15.078 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 
2026-03-10T11:07:15.078 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T11:07:15.400 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:07:15.403 DEBUG:teuthology.parallel:result is None 2026-03-10T11:07:15.468 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:15.554 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-10T11:07:15.976 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:16.037 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T11:07:16.037 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T11:07:16.420 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:16.483 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T11:07:16.483 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T11:07:16.824 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:16.906 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 
2026-03-10T11:07:16.907 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T11:07:17.327 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:17.329 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:17.345 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:17.405 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T11:07:17.405 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T11:07:17.799 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:17.814 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:17.816 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:17.828 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:07:17.953 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T11:07:17.960 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T11:07:17.976 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T11:07:18.057 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for install-info (6.8-4build1) ... 
2026-03-10T11:07:18.382 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:18.382 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date. 2026-03-10T11:07:18.382 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:18.382 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted: 2026-03-10T11:07:18.387 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred: 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted. 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries. 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:18.390 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T11:07:19.228 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:07:19.231 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-10T11:07:19.310 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:07:19.415 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 
2026-03-10T11:07:19.415 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:07:19.561 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:07:19.561 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T11:07:19.561 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T11:07:19.561 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:07:19.577 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed: 2026-03-10T11:07:19.578 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath python3-xmltodict 2026-03-10T11:07:19.788 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:07:19.788 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 34.3 kB of archives. 2026-03-10T11:07:19.788 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-10T11:07:19.788 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-10T11:07:19.872 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-10T11:07:20.071 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 34.3 kB in 0s (116 kB/s) 2026-03-10T11:07:20.086 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jmespath. 2026-03-10T11:07:20.122 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118577 files and directories currently installed.) 2026-03-10T11:07:20.125 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T11:07:20.126 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T11:07:20.142 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T11:07:20.148 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T11:07:20.149 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T11:07:20.176 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-10T11:07:20.244 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T11:07:20.581 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:20.582 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date. 
2026-03-10T11:07:20.582 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:20.582 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted: 2026-03-10T11:07:20.587 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred: 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted. 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries. 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:07:20.590 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T11:07:21.522 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T11:07:21.541 DEBUG:teuthology.parallel:result is None 2026-03-10T11:07:21.541 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:07:22.130 DEBUG:teuthology.orchestra.run.vm00:> dpkg-query -W -f '${Version}' ceph 2026-03-10T11:07:22.139 INFO:teuthology.orchestra.run.vm00.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T11:07:22.139 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T11:07:22.139 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T11:07:22.140 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:07:22.779 DEBUG:teuthology.orchestra.run.vm03:> dpkg-query -W -f '${Version}' ceph 2026-03-10T11:07:22.788 INFO:teuthology.orchestra.run.vm03.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T11:07:22.788 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T11:07:22.788 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T11:07:22.789 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T11:07:22.789 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:07:22.789 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T11:07:22.797 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T11:07:22.797 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T11:07:22.837 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 
2026-03-10T11:07:22.837 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:07:22.837 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T11:07:22.846 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T11:07:22.893 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T11:07:22.893 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T11:07:22.901 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T11:07:22.952 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-10T11:07:22.952 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:07:22.952 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T11:07:22.960 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T11:07:23.010 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T11:07:23.010 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T11:07:23.020 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T11:07:23.068 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-10T11:07:23.068 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:07:23.068 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T11:07:23.078 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T11:07:23.126 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T11:07:23.134 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T11:07:23.141 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T11:07:23.192 INFO:teuthology.run_tasks:Running task cephadm... 
2026-03-10T11:07:23.240 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'use-ca-signed-key': True} 2026-03-10T11:07:23.240 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:07:23.240 INFO:tasks.cephadm:Cluster fsid is 507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:07:23.240 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T11:07:23.240 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.100', 'mon.b': '192.168.123.103'} 2026-03-10T11:07:23.240 INFO:tasks.cephadm:First mon is mon.a on vm00 2026-03-10T11:07:23.240 INFO:tasks.cephadm:First mgr is a 2026-03-10T11:07:23.240 INFO:tasks.cephadm:Normalizing hostnames... 
2026-03-10T11:07:23.240 DEBUG:teuthology.orchestra.run.vm00:> sudo hostname $(hostname -s) 2026-03-10T11:07:23.249 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s) 2026-03-10T11:07:23.257 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-10T11:07:23.257 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:07:23.870 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-10T11:07:24.489 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:07:24.490 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-10T11:07:24.490 INFO:tasks.cephadm:Downloading cephadm from url: 
https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-10T11:07:24.490 DEBUG:teuthology.orchestra.run.vm00:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T11:07:25.872 INFO:teuthology.orchestra.run.vm00.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 11:07 /home/ubuntu/cephtest/cephadm 2026-03-10T11:07:25.872 DEBUG:teuthology.orchestra.run.vm03:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T11:07:27.192 INFO:teuthology.orchestra.run.vm03.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 11:07 /home/ubuntu/cephtest/cephadm 2026-03-10T11:07:27.192 DEBUG:teuthology.orchestra.run.vm00:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T11:07:27.196 DEBUG:teuthology.orchestra.run.vm03:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T11:07:27.203 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 
2026-03-10T11:07:27.204 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T11:07:27.239 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T11:07:27.332 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T11:07:27.334 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T11:08:08.973 INFO:teuthology.orchestra.run.vm03.stdout:{ 2026-03-10T11:08:08.973 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T11:08:08.973 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T11:08:08.973 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [ 2026-03-10T11:08:08.973 INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T11:08:08.973 INFO:teuthology.orchestra.run.vm03.stdout: ] 2026-03-10T11:08:08.973 INFO:teuthology.orchestra.run.vm03.stdout:} 2026-03-10T11:08:13.247 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-10T11:08:13.247 INFO:teuthology.orchestra.run.vm00.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-10T11:08:13.247 INFO:teuthology.orchestra.run.vm00.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-10T11:08:13.247 INFO:teuthology.orchestra.run.vm00.stdout: "repo_digests": [ 2026-03-10T11:08:13.247 
INFO:teuthology.orchestra.run.vm00.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-10T11:08:13.247 INFO:teuthology.orchestra.run.vm00.stdout: ] 2026-03-10T11:08:13.247 INFO:teuthology.orchestra.run.vm00.stdout:} 2026-03-10T11:08:13.259 DEBUG:teuthology.orchestra.run.vm00:> sudo ssh-keygen -t rsa -f /root/ca-key -N '' 2026-03-10T11:08:13.792 INFO:teuthology.orchestra.run.vm00.stdout:Generating public/private rsa key pair. 2026-03-10T11:08:13.792 INFO:teuthology.orchestra.run.vm00.stdout:Your identification has been saved in /root/ca-key 2026-03-10T11:08:13.792 INFO:teuthology.orchestra.run.vm00.stdout:Your public key has been saved in /root/ca-key.pub 2026-03-10T11:08:13.792 INFO:teuthology.orchestra.run.vm00.stdout:The key fingerprint is: 2026-03-10T11:08:13.792 INFO:teuthology.orchestra.run.vm00.stdout:SHA256:C7l1Ij0p2PdrQ+5bh0Q5xIRupZn4SgHtZ9ffTLi34HY root@vm00 2026-03-10T11:08:13.792 INFO:teuthology.orchestra.run.vm00.stdout:The key's randomart image is: 2026-03-10T11:08:13.792 INFO:teuthology.orchestra.run.vm00.stdout:+---[RSA 3072]----+ 2026-03-10T11:08:13.792 INFO:teuthology.orchestra.run.vm00.stdout:| . +o | 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:| . . ..o. | 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:| o o =+. . | 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:| o o+.O...o .| 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:| . * S*... =.| 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:| B.*o. o. =| 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:| ...+. + o..| 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:| . +o + E | 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:| o+o. . 
| 2026-03-10T11:08:13.793 INFO:teuthology.orchestra.run.vm00.stdout:+----[SHA256]-----+ 2026-03-10T11:08:13.793 DEBUG:teuthology.orchestra.run.vm00:> sudo cat /root/ca-key.pub 2026-03-10T11:08:13.801 INFO:teuthology.orchestra.run.vm00.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa1jwXXSd8FkoD4hXY1tHvIse3h8CO4mURQemURxiRlj/FH6dPph7v665bcHJxMNR6JWPKDMpN2h9gK+CfEQPV7gxGfdBKwh6SEYyN5U2PKRXCHaLZCnecScnHviGfyRMsBg8OmfKqbcMFuiHH/2ACjzwO2pLe7rz2kse4c9J7KKVbBi/oYTHApFqTcW/S4rS3+UVpkJjJad9wXSDv+51YuUR9kWPq74YbAT2zxR6xMRVWqOXcbLQCUAPPAKCFDCFm0N3NGX84520n28In02WrQrMyZOdOODQRHVVVq19ANVa7ELDYUrFMoFO6eLkt/uIYXHPrvbMKgNWs9xOjswInA5RPQ3IM5Axy3UmNMdHJAks7xTfvkfVS90bTxb+tkGqJwE/8KSouMHgRnCkQuCXU+rGpIuWJ0IWTbkIaeZTSxcLoVKpzItjlVOBe0VSs1dz4vzHQkGsKN/0kKTXhPOIrzbT9lbglxzbwPxfOJSXf02D0sEylPdt0XganxoATKxU= root@vm00 2026-03-10T11:08:13.801 DEBUG:teuthology.orchestra.run.vm00:> sudo echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa1jwXXSd8FkoD4hXY1tHvIse3h8CO4mURQemURxiRlj/FH6dPph7v665bcHJxMNR6JWPKDMpN2h9gK+CfEQPV7gxGfdBKwh6SEYyN5U2PKRXCHaLZCnecScnHviGfyRMsBg8OmfKqbcMFuiHH/2ACjzwO2pLe7rz2kse4c9J7KKVbBi/oYTHApFqTcW/S4rS3+UVpkJjJad9wXSDv+51YuUR9kWPq74YbAT2zxR6xMRVWqOXcbLQCUAPPAKCFDCFm0N3NGX84520n28In02WrQrMyZOdOODQRHVVVq19ANVa7ELDYUrFMoFO6eLkt/uIYXHPrvbMKgNWs9xOjswInA5RPQ3IM5Axy3UmNMdHJAks7xTfvkfVS90bTxb+tkGqJwE/8KSouMHgRnCkQuCXU+rGpIuWJ0IWTbkIaeZTSxcLoVKpzItjlVOBe0VSs1dz4vzHQkGsKN/0kKTXhPOIrzbT9lbglxzbwPxfOJSXf02D0sEylPdt0XganxoATKxU= root@vm00 2026-03-10T11:08:13.802 DEBUG:teuthology.orchestra.run.vm00:> ' | sudo tee -a /etc/ssh/ca-key.pub 2026-03-10T11:08:13.850 INFO:teuthology.orchestra.run.vm00.stdout:ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDa1jwXXSd8FkoD4hXY1tHvIse3h8CO4mURQemURxiRlj/FH6dPph7v665bcHJxMNR6JWPKDMpN2h9gK+CfEQPV7gxGfdBKwh6SEYyN5U2PKRXCHaLZCnecScnHviGfyRMsBg8OmfKqbcMFuiHH/2ACjzwO2pLe7rz2kse4c9J7KKVbBi/oYTHApFqTcW/S4rS3+UVpkJjJad9wXSDv+51YuUR9kWPq74YbAT2zxR6xMRVWqOXcbLQCUAPPAKCFDCFm0N3NGX84520n28In02WrQrMyZOdOODQRHVVVq19ANVa7ELDYUrFMoFO6eLkt/uIYXHPrvbMKgNWs9xOjswInA5RPQ3IM5Axy3UmNMdHJAks7xTfvkfVS90bTxb+tkGqJwE/8KSouMHgRnCkQuCXU+rGpIuWJ0IWTbkIaeZTSxcLoVKpzItjlVOBe0VSs1dz4vzHQkGsKN/0kKTXhPOIrzbT9lbglxzbwPxfOJSXf02D0sEylPdt0XganxoATKxU= root@vm00 2026-03-10T11:08:13.850 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:08:13.851 DEBUG:teuthology.orchestra.run.vm00:> sudo echo 'TrustedUserCAKeys /etc/ssh/ca-key.pub' | sudo tee -a /etc/ssh/sshd_config && sudo systemctl restart sshd 2026-03-10T11:08:13.903 INFO:teuthology.orchestra.run.vm00.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub 2026-03-10T11:08:13.927 DEBUG:teuthology.orchestra.run.vm03:> sudo echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa1jwXXSd8FkoD4hXY1tHvIse3h8CO4mURQemURxiRlj/FH6dPph7v665bcHJxMNR6JWPKDMpN2h9gK+CfEQPV7gxGfdBKwh6SEYyN5U2PKRXCHaLZCnecScnHviGfyRMsBg8OmfKqbcMFuiHH/2ACjzwO2pLe7rz2kse4c9J7KKVbBi/oYTHApFqTcW/S4rS3+UVpkJjJad9wXSDv+51YuUR9kWPq74YbAT2zxR6xMRVWqOXcbLQCUAPPAKCFDCFm0N3NGX84520n28In02WrQrMyZOdOODQRHVVVq19ANVa7ELDYUrFMoFO6eLkt/uIYXHPrvbMKgNWs9xOjswInA5RPQ3IM5Axy3UmNMdHJAks7xTfvkfVS90bTxb+tkGqJwE/8KSouMHgRnCkQuCXU+rGpIuWJ0IWTbkIaeZTSxcLoVKpzItjlVOBe0VSs1dz4vzHQkGsKN/0kKTXhPOIrzbT9lbglxzbwPxfOJSXf02D0sEylPdt0XganxoATKxU= root@vm00 2026-03-10T11:08:13.927 DEBUG:teuthology.orchestra.run.vm03:> ' | sudo tee -a /etc/ssh/ca-key.pub 2026-03-10T11:08:13.936 INFO:teuthology.orchestra.run.vm03.stdout:ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDa1jwXXSd8FkoD4hXY1tHvIse3h8CO4mURQemURxiRlj/FH6dPph7v665bcHJxMNR6JWPKDMpN2h9gK+CfEQPV7gxGfdBKwh6SEYyN5U2PKRXCHaLZCnecScnHviGfyRMsBg8OmfKqbcMFuiHH/2ACjzwO2pLe7rz2kse4c9J7KKVbBi/oYTHApFqTcW/S4rS3+UVpkJjJad9wXSDv+51YuUR9kWPq74YbAT2zxR6xMRVWqOXcbLQCUAPPAKCFDCFm0N3NGX84520n28In02WrQrMyZOdOODQRHVVVq19ANVa7ELDYUrFMoFO6eLkt/uIYXHPrvbMKgNWs9xOjswInA5RPQ3IM5Axy3UmNMdHJAks7xTfvkfVS90bTxb+tkGqJwE/8KSouMHgRnCkQuCXU+rGpIuWJ0IWTbkIaeZTSxcLoVKpzItjlVOBe0VSs1dz4vzHQkGsKN/0kKTXhPOIrzbT9lbglxzbwPxfOJSXf02D0sEylPdt0XganxoATKxU= root@vm00 2026-03-10T11:08:13.936 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:08:13.936 DEBUG:teuthology.orchestra.run.vm03:> sudo echo 'TrustedUserCAKeys /etc/ssh/ca-key.pub' | sudo tee -a /etc/ssh/sshd_config && sudo systemctl restart sshd 2026-03-10T11:08:13.986 INFO:teuthology.orchestra.run.vm03.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub 2026-03-10T11:08:14.006 DEBUG:teuthology.orchestra.run.vm00:> sudo ssh-keygen -t rsa -f /root/cephadm-ssh-key -N '' && sudo ssh-keygen -s /root/ca-key -I user_root -n root -V +52w /root/cephadm-ssh-key 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:Generating public/private rsa key pair. 
2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:Your identification has been saved in /root/cephadm-ssh-key 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:Your public key has been saved in /root/cephadm-ssh-key.pub 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:The key fingerprint is: 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:SHA256:9CM+10Zn24Cppxrz1/bC2o23YH7NSHpjiWjQT3Lzsqk root@vm00 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:The key's randomart image is: 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:+---[RSA 3072]----+ 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:| | 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:| | 2026-03-10T11:08:14.333 INFO:teuthology.orchestra.run.vm00.stdout:| . | 2026-03-10T11:08:14.334 INFO:teuthology.orchestra.run.vm00.stdout:| . . o | 2026-03-10T11:08:14.334 INFO:teuthology.orchestra.run.vm00.stdout:| S.o + + | 2026-03-10T11:08:14.334 INFO:teuthology.orchestra.run.vm00.stdout:| ...o=+o.+ | 2026-03-10T11:08:14.334 INFO:teuthology.orchestra.run.vm00.stdout:| =.o*+@.+o| 2026-03-10T11:08:14.334 INFO:teuthology.orchestra.run.vm00.stdout:| *o+Oo&++| 2026-03-10T11:08:14.334 INFO:teuthology.orchestra.run.vm00.stdout:| .oEo+O+*+| 2026-03-10T11:08:14.334 INFO:teuthology.orchestra.run.vm00.stdout:+----[SHA256]-----+ 2026-03-10T11:08:14.343 INFO:teuthology.orchestra.run.vm00.stderr:Signed user key /root/cephadm-ssh-key-cert.pub: id "user_root" serial 0 for root valid from 2026-03-10T11:07:00 to 2027-03-09T11:08:14 2026-03-10T11:08:14.344 DEBUG:teuthology.orchestra.run.vm00:> sudo cat /etc/ssh/ca-key.pub 2026-03-10T11:08:14.351 INFO:teuthology.orchestra.run.vm00.stdout:ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDa1jwXXSd8FkoD4hXY1tHvIse3h8CO4mURQemURxiRlj/FH6dPph7v665bcHJxMNR6JWPKDMpN2h9gK+CfEQPV7gxGfdBKwh6SEYyN5U2PKRXCHaLZCnecScnHviGfyRMsBg8OmfKqbcMFuiHH/2ACjzwO2pLe7rz2kse4c9J7KKVbBi/oYTHApFqTcW/S4rS3+UVpkJjJad9wXSDv+51YuUR9kWPq74YbAT2zxR6xMRVWqOXcbLQCUAPPAKCFDCFm0N3NGX84520n28In02WrQrMyZOdOODQRHVVVq19ANVa7ELDYUrFMoFO6eLkt/uIYXHPrvbMKgNWs9xOjswInA5RPQ3IM5Axy3UmNMdHJAks7xTfvkfVS90bTxb+tkGqJwE/8KSouMHgRnCkQuCXU+rGpIuWJ0IWTbkIaeZTSxcLoVKpzItjlVOBe0VSs1dz4vzHQkGsKN/0kKTXhPOIrzbT9lbglxzbwPxfOJSXf02D0sEylPdt0XganxoATKxU= root@vm00 2026-03-10T11:08:14.351 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:08:14.351 DEBUG:teuthology.orchestra.run.vm00:> sudo cat /etc/ssh/sshd_config | grep TrustedUserCAKeys 2026-03-10T11:08:14.401 INFO:teuthology.orchestra.run.vm00.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub 2026-03-10T11:08:14.401 DEBUG:teuthology.orchestra.run.vm03:> sudo cat /etc/ssh/ca-key.pub 2026-03-10T11:08:14.408 INFO:teuthology.orchestra.run.vm03.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa1jwXXSd8FkoD4hXY1tHvIse3h8CO4mURQemURxiRlj/FH6dPph7v665bcHJxMNR6JWPKDMpN2h9gK+CfEQPV7gxGfdBKwh6SEYyN5U2PKRXCHaLZCnecScnHviGfyRMsBg8OmfKqbcMFuiHH/2ACjzwO2pLe7rz2kse4c9J7KKVbBi/oYTHApFqTcW/S4rS3+UVpkJjJad9wXSDv+51YuUR9kWPq74YbAT2zxR6xMRVWqOXcbLQCUAPPAKCFDCFm0N3NGX84520n28In02WrQrMyZOdOODQRHVVVq19ANVa7ELDYUrFMoFO6eLkt/uIYXHPrvbMKgNWs9xOjswInA5RPQ3IM5Axy3UmNMdHJAks7xTfvkfVS90bTxb+tkGqJwE/8KSouMHgRnCkQuCXU+rGpIuWJ0IWTbkIaeZTSxcLoVKpzItjlVOBe0VSs1dz4vzHQkGsKN/0kKTXhPOIrzbT9lbglxzbwPxfOJSXf02D0sEylPdt0XganxoATKxU= root@vm00 2026-03-10T11:08:14.409 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:08:14.409 DEBUG:teuthology.orchestra.run.vm03:> sudo cat /etc/ssh/sshd_config | grep TrustedUserCAKeys 2026-03-10T11:08:14.460 INFO:teuthology.orchestra.run.vm03.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub 2026-03-10T11:08:14.461 DEBUG:teuthology.orchestra.run.vm00:> sudo ls /root/ 2026-03-10T11:08:14.469 
INFO:teuthology.orchestra.run.vm00.stdout:ca-key 2026-03-10T11:08:14.469 INFO:teuthology.orchestra.run.vm00.stdout:ca-key.pub 2026-03-10T11:08:14.469 INFO:teuthology.orchestra.run.vm00.stdout:cephadm-ssh-key 2026-03-10T11:08:14.469 INFO:teuthology.orchestra.run.vm00.stdout:cephadm-ssh-key-cert.pub 2026-03-10T11:08:14.469 INFO:teuthology.orchestra.run.vm00.stdout:cephadm-ssh-key.pub 2026-03-10T11:08:14.469 INFO:teuthology.orchestra.run.vm00.stdout:snap 2026-03-10T11:08:14.470 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph 2026-03-10T11:08:14.517 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph 2026-03-10T11:08:14.525 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /etc/ceph 2026-03-10T11:08:14.566 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph 2026-03-10T11:08:14.576 INFO:tasks.cephadm:Writing seed config... 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [global] mon election default strategy = 1 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-10T11:08:14.577 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-10T11:08:14.577 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:08:14.577 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-10T11:08:14.609 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = 
false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = 507c5972-1c71-11f1-afff-ff6f68248060 mon election default strategy = 1 [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = true bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 mgr/cephadm/use_agent = False [mon] mon 
data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true 2026-03-10T11:08:14.609 DEBUG:teuthology.orchestra.run.vm00:mon.a> sudo journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.a.service 2026-03-10T11:08:14.651 DEBUG:teuthology.orchestra.run.vm00:mgr.a> sudo journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.a.service 2026-03-10T11:08:14.695 INFO:tasks.cephadm:Bootstrapping... 2026-03-10T11:08:14.695 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid 507c5972-1c71-11f1-afff-ff6f68248060 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --ssh-private-key /root/cephadm-ssh-key --ssh-signed-cert /root/cephadm-ssh-key-cert.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T11:08:14.832 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-10T11:08:14.832 INFO:teuthology.orchestra.run.vm00.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', '507c5972-1c71-11f1-afff-ff6f68248060', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', 
'--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--ssh-private-key', '/root/cephadm-ssh-key', '--ssh-signed-cert', '/root/cephadm-ssh-key-cert.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.100', '--skip-admin-label'] 2026-03-10T11:08:14.832 INFO:teuthology.orchestra.run.vm00.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-10T11:08:14.832 INFO:teuthology.orchestra.run.vm00.stdout:Verifying podman|docker is present... 2026-03-10T11:08:14.832 INFO:teuthology.orchestra.run.vm00.stdout:Verifying lvm2 is present... 2026-03-10T11:08:14.832 INFO:teuthology.orchestra.run.vm00.stdout:Verifying time synchronization is in place... 2026-03-10T11:08:14.835 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T11:08:14.835 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T11:08:14.837 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T11:08:14.837 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T11:08:14.839 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T11:08:14.839 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T11:08:14.842 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T11:08:14.842 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T11:08:14.844 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 
2026-03-10T11:08:14.844 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked
2026-03-10T11:08:14.846 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-10T11:08:14.846 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive
2026-03-10T11:08:14.849 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-10T11:08:14.849 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T11:08:14.852 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-10T11:08:14.852 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive
2026-03-10T11:08:14.855 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled
2026-03-10T11:08:14.857 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active
2026-03-10T11:08:14.857 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running
2026-03-10T11:08:14.857 INFO:teuthology.orchestra.run.vm00.stdout:Repeating the final host check...
2026-03-10T11:08:14.857 INFO:teuthology.orchestra.run.vm00.stdout:docker (/usr/bin/docker) is present
2026-03-10T11:08:14.857 INFO:teuthology.orchestra.run.vm00.stdout:systemctl is present
2026-03-10T11:08:14.857 INFO:teuthology.orchestra.run.vm00.stdout:lvcreate is present
2026-03-10T11:08:14.859 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T11:08:14.859 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T11:08:14.861 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T11:08:14.861 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive
2026-03-10T11:08:14.863 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service
2026-03-10T11:08:14.863 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T11:08:14.865 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service
2026-03-10T11:08:14.865 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive
2026-03-10T11:08:14.868 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service
2026-03-10T11:08:14.868 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked
2026-03-10T11:08:14.871 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service
2026-03-10T11:08:14.871 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive
2026-03-10T11:08:14.873 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service
2026-03-10T11:08:14.873 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T11:08:14.875 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service
2026-03-10T11:08:14.875 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive
2026-03-10T11:08:14.878 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:Host looks OK
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:Cluster fsid: 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:Acquiring lock 139732253591776 on /run/cephadm/507c5972-1c71-11f1-afff-ff6f68248060.lock
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:Lock 139732253591776 acquired on /run/cephadm/507c5972-1c71-11f1-afff-ff6f68248060.lock
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 3300 ...
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 6789 ...
2026-03-10T11:08:14.881 INFO:teuthology.orchestra.run.vm00.stdout:Base mon IP(s) is [192.168.123.100:3300, 192.168.123.100:6789], mon addrv is [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-10T11:08:14.882 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.100 metric 100
2026-03-10T11:08:14.882 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-10T11:08:14.882 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.100 metric 100
2026-03-10T11:08:14.882 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.100 metric 100
2026-03-10T11:08:14.883 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T11:08:14.883 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-10T11:08:14.885 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T11:08:14.885 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T11:08:14.885 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T11:08:14.885 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000
2026-03-10T11:08:14.885 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:0/64 scope link
2026-03-10T11:08:14.885 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T11:08:14.885 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24`
2026-03-10T11:08:14.885 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24`
2026-03-10T11:08:14.886 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32`
2026-03-10T11:08:14.886 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32`
2026-03-10T11:08:14.886 INFO:teuthology.orchestra.run.vm00.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-10T11:08:14.886 INFO:teuthology.orchestra.run.vm00.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T11:08:14.886 INFO:teuthology.orchestra.run.vm00.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T11:08:15.860 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-10T11:08:15.860 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-10T11:08:15.860 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:08:15.860 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:08:16.067 INFO:teuthology.orchestra.run.vm00.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T11:08:16.068 INFO:teuthology.orchestra.run.vm00.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T11:08:16.068 INFO:teuthology.orchestra.run.vm00.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T11:08:16.229 INFO:teuthology.orchestra.run.vm00.stdout:stat: stdout 167 167
2026-03-10T11:08:16.229 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial keys...
2026-03-10T11:08:16.354 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCg+69pteQnExAAA/dcFTBIZCnM/8XPsCGPZg==
2026-03-10T11:08:16.483 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCg+69pIpvcGhAAo1/8YGYdyb5TsF/2ei3/8w==
2026-03-10T11:08:16.590 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCg+69poPJpIRAATczUGe/SViDqEdRzk7yiTQ==
2026-03-10T11:08:16.590 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial monmap...
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool for a [v2:192.168.123.100:3300,v1:192.168.123.100:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = quincy
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: set fsid to 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:16.767 INFO:teuthology.orchestra.run.vm00.stdout:Creating mon...
2026-03-10T11:08:16.923 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.850+0000 7f994feded80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 1 imported monmap:
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr fsid 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-10T11:08:16.714328+0000
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr created 2026-03-10T11:08:16.714328+0000
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 0 /usr/bin/ceph-mon: set fsid to 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: RocksDB version: 7.9.2
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Git sha 0
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: DB SUMMARY
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: DB Session ID: NKDU74N64FPCG55Q8IVM
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db:
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.error_if_exists: 0
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.create_if_missing: 1
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.env: 0x55e2269aedc0
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.info_log: 0x55e22ebc6da0
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.statistics: (nil)
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.use_fsync: 0
2026-03-10T11:08:16.924 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.db_log_dir:
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.wal_dir:
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.write_buffer_manager: 0x55e22ebbd5e0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.unordered_write: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.row_cache: None
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.wal_filter: None
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.two_write_queues: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T11:08:16.925 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.wal_compression: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.atomic_flush: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_open_files: -1
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Compression algorithms supported:
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: kZSTD supported: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: kXpressCompression supported: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: kZlibCompression supported: 1
2026-03-10T11:08:16.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.854+0000 7f994feded80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.merge_operator:
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_filter: None
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e22ebb9520)
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_type: 0
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr checksum: 4
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55e22ebdf350
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_options:
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size_deviation:
10 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr format_version: 5 2026-03-10T11:08:16.927 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1 2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_align: 0 2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144 2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0 2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192 2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2 2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression: NoCompression
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.num_levels: 7
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T11:08:16.928 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.ttl: 2592000
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b1fb7274-a743-4122-9bda-193345ebf799
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.858+0000 7f994feded80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.862+0000 7f994feded80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e22ebe0e00
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.862+0000 7f994feded80 4 rocksdb: DB pointer 0x55e22ecc4000
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.862+0000 7f9947668640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.862+0000 7f9947668640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** DB Stats **
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T11:08:16.929 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] **
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55e22ebdf350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1e-05 secs_since: 0
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%)
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] **
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.862+0000 7f994feded80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.862+0000 7f994feded80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T11:08:16.862+0000 7f994feded80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
2026-03-10T11:08:16.930 INFO:teuthology.orchestra.run.vm00.stdout:create mon.a on
2026-03-10T11:08:17.093 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target.
2026-03-10T11:08:17.257 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T11:08:17.489 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-507c5972-1c71-11f1-afff-ff6f68248060.target → /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060.target.
2026-03-10T11:08:17.489 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-507c5972-1c71-11f1-afff-ff6f68248060.target → /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060.target.
2026-03-10T11:08:17.682 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.a
2026-03-10T11:08:17.682 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.a.service: Unit ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.a.service not loaded.
2026-03-10T11:08:17.865 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060.target.wants/ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.a.service → /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service.
2026-03-10T11:08:17.876 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T11:08:17.876 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T11:08:17.876 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon to start...
2026-03-10T11:08:17.876 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon...
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout id: 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout services:
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.150599s)
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout data:
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:mon is available
2026-03-10T11:08:18.309 INFO:teuthology.orchestra.run.vm00.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T11:08:18.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:18 vm00 bash[20277]: cluster 2026-03-10T11:08:18.114362+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T11:08:18.612 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:18.612 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T11:08:18.612 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:18.612 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T11:08:18.612 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T11:08:18.613 INFO:teuthology.orchestra.run.vm00.stdout:Generating new minimal ceph.conf...
2026-03-10T11:08:18.850 INFO:teuthology.orchestra.run.vm00.stdout:Restarting the monitor...
2026-03-10T11:08:18.967 INFO:teuthology.orchestra.run.vm00.stdout:Setting public_network to 192.168.123.0/24,192.168.123.1/32 in mon config section
2026-03-10T11:08:19.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:18 vm00 systemd[1]: Stopping Ceph mon.a for 507c5972-1c71-11f1-afff-ff6f68248060...
2026-03-10T11:08:19.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:18 vm00 bash[20277]: debug 2026-03-10T11:08:18.894+0000 7f6f1056a640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T11:08:19.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:18 vm00 bash[20277]: debug 2026-03-10T11:08:18.894+0000 7f6f1056a640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-10T11:08:19.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:18 vm00 bash[20669]: ceph-507c5972-1c71-11f1-afff-ff6f68248060-mon-a
2026-03-10T11:08:19.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:18 vm00 systemd[1]: ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.a.service: Deactivated successfully.
2026-03-10T11:08:19.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:18 vm00 systemd[1]: Stopped Ceph mon.a for 507c5972-1c71-11f1-afff-ff6f68248060.
2026-03-10T11:08:19.133 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:18 vm00 systemd[1]: Started Ceph mon.a for 507c5972-1c71-11f1-afff-ff6f68248060.
2026-03-10T11:08:19.266 INFO:teuthology.orchestra.run.vm00.stdout:Wrote config to /etc/ceph/ceph.conf
2026-03-10T11:08:19.267 INFO:teuthology.orchestra.run.vm00.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-10T11:08:19.268 INFO:teuthology.orchestra.run.vm00.stdout:Creating mgr...
2026-03-10T11:08:19.268 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:9283 ...
2026-03-10T11:08:19.268 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8765 ...
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.130+0000 7ffb4b34dd80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.130+0000 7ffb4b34dd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 8
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.130+0000 7ffb4b34dd80 0 pidfile_write: ignore empty --pid-file
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 0 load: jerasure load: lrc
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: RocksDB version: 7.9.2
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Git sha 0
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: DB SUMMARY
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: DB Session ID: XL3KQDYG54YGK2G5O63Q
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: CURRENT file: CURRENT
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75507 ;
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.error_if_exists: 0
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.create_if_missing: 0
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.env: 0x55ec40208dc0 2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.info_log: 0x55ec56ecad00 2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.statistics: (nil) 2026-03-10T11:08:19.426 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.use_fsync: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 
2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.db_log_dir: 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.wal_dir: 2026-03-10T11:08:19.427 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.write_buffer_manager: 0x55ec56ecf900 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 
bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.unordered_write: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: 
Options.write_thread_max_yield_usec: 100 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.row_cache: None 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.wal_filter: None 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.wal_compression: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T11:08:19.427 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 
bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 
rocksdb: Options.max_background_jobs: 2 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 
2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_open_files: -1 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T11:08:19.428 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Compression algorithms supported: 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: kZSTD supported: 0 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 
7ffb4b34dd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: 
debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.merge_operator: 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_filter: None 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ec56eca480) 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cache_index_and_filter_blocks: 1 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: pin_top_level_index_and_filter: 1 2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: index_type: 0 2026-03-10T11:08:19.429 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: data_block_index_type: 0
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: index_shortening: 1
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: data_block_hash_table_util_ratio: 0.750000
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: checksum: 4
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: no_block_cache: 0
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: block_cache: 0x55ec56ef1350
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: block_cache_name: BinnedLRUCache
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: block_cache_options:
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: capacity : 536870912
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: num_shard_bits : 4
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: strict_capacity_limit : 0
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: high_pri_pool_ratio: 0.000
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: block_cache_compressed: (nil)
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: persistent_cache: (nil)
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: block_size: 4096
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: block_size_deviation: 10
2026-03-10T11:08:19.429 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: block_restart_interval: 16
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: index_block_restart_interval: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: metadata_block_size: 4096
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: partition_filters: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: use_delta_encoding: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: filter_policy: bloomfilter
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: whole_key_filtering: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: verify_compression: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: read_amp_bytes_per_bit: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: format_version: 5
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: enable_index_compression: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: block_align: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: max_auto_readahead_size: 262144
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: prepopulate_block_cache: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: initial_auto_readahead_size: 8192
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: num_file_reads_for_auto_readahead: 2
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression: NoCompression
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.num_levels: 7
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T11:08:19.430 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.ttl: 2592000
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b1fb7274-a743-4122-9bda-193345ebf799
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773140899138713, "job": 1, "event": "recovery_started", "wal_files": [9]}
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.134+0000 7ffb4b34dd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.138+0000 7ffb4b34dd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773140899139991, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70867, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65346, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773140899, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1fb7274-a743-4122-9bda-193345ebf799", "db_session_id": "XL3KQDYG54YGK2G5O63Q", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.138+0000 7ffb4b34dd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773140899140046, "job": 1, "event": "recovery_finished"}
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: debug 2026-03-10T11:08:19.138+0000 7ffb4b34dd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148020+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148020+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148087+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148087+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148093+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:19.431 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148093+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148100+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T11:08:16.714328+0000
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148100+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T11:08:16.714328+0000
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148111+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T11:08:16.714328+0000
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148111+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T11:08:16.714328+0000
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148116+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148116+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148125+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148125+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148129+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148129+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148440+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148440+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148481+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148481+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148957+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active
2026-03-10T11:08:19.432 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 bash[20758]: cluster 2026-03-10T11:08:19.148957+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active
2026-03-10T11:08:19.493 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.a
2026-03-10T11:08:19.493 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.a.service: Unit ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.a.service not loaded.
2026-03-10T11:08:19.710 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060.target.wants/ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.a.service → /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service.
2026-03-10T11:08:19.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:08:19.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:19 vm00 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:08:19.717 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:19 vm00 systemd[1]: Started Ceph mgr.a for 507c5972-1c71-11f1-afff-ff6f68248060.
2026-03-10T11:08:19.722 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T11:08:19.722 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T11:08:19.722 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T11:08:19.722 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available
2026-03-10T11:08:19.722 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr to start...
2026-03-10T11:08:19.722 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr...
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "507c5972-1c71-11f1-afff-ff6f68248060",
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 0,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T11:08:20.011 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T11:08:18:119019+0000",
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T11:08:18.119733+0000",
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T11:08:20.012 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (1/15)...
2026-03-10T11:08:20.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[20758]: audit 2026-03-10T11:08:19.226189+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/377900789' entity='client.admin'
2026-03-10T11:08:20.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[20758]: audit 2026-03-10T11:08:19.226189+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/377900789' entity='client.admin'
2026-03-10T11:08:20.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[20758]: audit 2026-03-10T11:08:19.927519+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/722572403' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:08:20.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[20758]: audit 2026-03-10T11:08:19.927519+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/722572403' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:08:20.482 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[21028]: debug 2026-03-10T11:08:19.998+0000 7f70b6ce0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:08:20.482 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[21028]: debug 2026-03-10T11:08:20.046+0000 7f70b6ce0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:08:20.482 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[21028]: debug 2026-03-10T11:08:20.170+0000 7f70b6ce0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T11:08:20.942 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[21028]: debug 2026-03-10T11:08:20.482+0000 7f70b6ce0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T11:08:21.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:20 vm00 bash[21028]: debug 2026-03-10T11:08:20.938+0000 7f70b6ce0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T11:08:21.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.026+0000 7f70b6ce0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T11:08:21.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T11:08:21.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T11:08:21.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: from numpy import show_config as show_numpy_config
2026-03-10T11:08:21.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.150+0000 7f70b6ce0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T11:08:21.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.282+0000 7f70b6ce0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T11:08:21.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.318+0000 7f70b6ce0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T11:08:21.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.354+0000 7f70b6ce0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T11:08:21.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.398+0000 7f70b6ce0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T11:08:21.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.446+0000 7f70b6ce0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T11:08:22.141 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.878+0000 7f70b6ce0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T11:08:22.141 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.922+0000 7f70b6ce0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T11:08:22.141 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:21 vm00 bash[21028]: debug 2026-03-10T11:08:21.958+0000 7f70b6ce0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "507c5972-1c71-11f1-afff-ff6f68248060",
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:22.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 3,
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T11:08:22.300 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T11:08:18:119019+0000",
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T11:08:18.119733+0000",
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T11:08:22.301 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (2/15)...
2026-03-10T11:08:22.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[20758]: audit 2026-03-10T11:08:22.233384+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/3326470489' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:08:22.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[20758]: audit 2026-03-10T11:08:22.233384+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/3326470489' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:08:22.482 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.138+0000 7f70b6ce0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T11:08:22.482 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.182+0000 7f70b6ce0140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T11:08:22.482 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.234+0000 7f70b6ce0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T11:08:22.482 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.370+0000 7f70b6ce0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:08:22.783 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.526+0000 7f70b6ce0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T11:08:22.783 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.702+0000 7f70b6ce0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T11:08:22.783 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.734+0000 7f70b6ce0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T11:08:23.039 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.782+0000 7f70b6ce0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T11:08:23.039 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:22 vm00 bash[21028]: debug 2026-03-10T11:08:22.934+0000 7f70b6ce0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: cluster 2026-03-10T11:08:23.296445+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: cluster 2026-03-10T11:08:23.296445+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: cluster 2026-03-10T11:08:23.344325+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0480255s)
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: cluster 2026-03-10T11:08:23.344325+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0480255s)
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.347676+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.347676+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.347778+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.347778+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.347851+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.347851+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.347924+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.347924+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.348710+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.348710+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: cluster 2026-03-10T11:08:23.356849+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon a is now available
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: cluster 2026-03-10T11:08:23.356849+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon a is now available
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.371341+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:08:23.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.371341+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.373530+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.373530+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.374511+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.374511+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.376179+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.376179+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.379611+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[20758]: audit 2026-03-10T11:08:23.379611+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:08:23.733 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:23 vm00 bash[21028]: debug 2026-03-10T11:08:23.290+0000 7f70b6ce0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "507c5972-1c71-11f1-afff-ff6f68248060",
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": {
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK",
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {},
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": []
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5,
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a"
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 5,
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": {
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid",
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:24.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": {
2026-03-10T11:08:24.705 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:24.705 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0,
2026-03-10T11:08:24.705 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0,
2026-03-10T11:08:24.705 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0,
2026-03-10T11:08:24.705 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0,
2026-03-10T11:08:24.705 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0,
2026-03-10T11:08:24.705 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": {
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [],
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": {
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T11:08:18:119019+0000",
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [],
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": {
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat",
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs",
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful"
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ],
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": {
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1,
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T11:08:18.119733+0000",
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {}
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout },
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {}
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T11:08:24.706 INFO:teuthology.orchestra.run.vm00.stdout:mgr is available
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789]
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T11:08:25.219 INFO:teuthology.orchestra.run.vm00.stdout:Enabling cephadm module...
2026-03-10T11:08:25.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[20758]: cluster 2026-03-10T11:08:24.353887+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: a(active, since 1.05759s)
2026-03-10T11:08:25.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[20758]: cluster 2026-03-10T11:08:24.353887+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: a(active, since 1.05759s)
2026-03-10T11:08:25.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[20758]: audit 2026-03-10T11:08:24.663996+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.100:0/3273142066' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:08:25.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[20758]: audit 2026-03-10T11:08:24.663996+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.100:0/3273142066' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:08:25.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[20758]: audit 2026-03-10T11:08:24.933706+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.100:0/202838315' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T11:08:25.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[20758]: audit 2026-03-10T11:08:24.933706+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.100:0/202838315' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T11:08:25.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[20758]: audit 2026-03-10T11:08:24.936398+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.100:0/202838315' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
2026-03-10T11:08:25.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[20758]: audit 2026-03-10T11:08:24.936398+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.100:0/202838315' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
2026-03-10T11:08:26.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[21028]: ignoring --setuser ceph since I am not root
2026-03-10T11:08:26.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:25 vm00 bash[21028]: ignoring --setgroup ceph since I am not root
2026-03-10T11:08:26.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[21028]: debug 2026-03-10T11:08:26.062+0000 7f0d733db140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:08:26.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[21028]: debug 2026-03-10T11:08:26.110+0000 7f0d733db140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:08:26.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T11:08:26.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 4,
2026-03-10T11:08:26.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T11:08:26.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T11:08:26.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T11:08:26.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T11:08:26.414 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart...
2026-03-10T11:08:26.414 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 4...
2026-03-10T11:08:26.589 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[20758]: audit 2026-03-10T11:08:25.487518+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.100:0/3251125133' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T11:08:26.589 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[20758]: audit 2026-03-10T11:08:25.487518+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.100:0/3251125133' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T11:08:26.589 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[20758]: audit 2026-03-10T11:08:25.937011+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/3251125133' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T11:08:26.589 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[20758]: audit 2026-03-10T11:08:25.937011+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/3251125133' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T11:08:26.590 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[20758]: cluster 2026-03-10T11:08:25.940524+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e4: a(active, since 2s)
2026-03-10T11:08:26.590 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[20758]: cluster 2026-03-10T11:08:25.940524+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e4: a(active, since 2s)
2026-03-10T11:08:26.590 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[20758]: audit 2026-03-10T11:08:26.344256+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.100:0/3546445853' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T11:08:26.590 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[20758]: audit 2026-03-10T11:08:26.344256+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.100:0/3546445853' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T11:08:26.590 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[21028]: debug 2026-03-10T11:08:26.238+0000 7f0d733db140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T11:08:26.982 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:26 vm00 bash[21028]: debug 2026-03-10T11:08:26.586+0000 7f0d733db140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T11:08:27.422 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: debug 2026-03-10T11:08:27.058+0000 7f0d733db140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T11:08:27.422 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: debug 2026-03-10T11:08:27.150+0000 7f0d733db140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T11:08:27.422 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T11:08:27.423 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T11:08:27.423 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: from numpy import show_config as show_numpy_config 2026-03-10T11:08:27.423 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: debug 2026-03-10T11:08:27.278+0000 7f0d733db140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:08:27.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: debug 2026-03-10T11:08:27.418+0000 7f0d733db140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:08:27.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: debug 2026-03-10T11:08:27.458+0000 7f0d733db140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:08:27.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: debug 2026-03-10T11:08:27.494+0000 7f0d733db140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:08:27.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: debug 2026-03-10T11:08:27.538+0000 7f0d733db140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:08:27.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:27 vm00 bash[21028]: debug 2026-03-10T11:08:27.586+0000 7f0d733db140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:08:28.319 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.050+0000 7f0d733db140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:08:28.319 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.086+0000 7f0d733db140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:08:28.319 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.126+0000 7f0d733db140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-10T11:08:28.319 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.274+0000 7f0d733db140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:08:28.636 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.318+0000 7f0d733db140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:08:28.636 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.354+0000 7f0d733db140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:08:28.636 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.470+0000 7f0d733db140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:08:28.888 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.634+0000 7f0d733db140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:08:28.888 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.810+0000 7f0d733db140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:08:28.888 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.842+0000 7f0d733db140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:08:29.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:28 vm00 bash[21028]: debug 2026-03-10T11:08:28.886+0000 7f0d733db140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:08:29.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[21028]: debug 2026-03-10T11:08:29.038+0000 7f0d733db140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.276091+0000 mon.a (mon.0) 36 
: cluster [INF] Active manager daemon a restarted 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.276091+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon a restarted 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.276506+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon a 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.276506+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon a 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.281620+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.281620+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.281736+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00532358s) 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.281736+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00532358s) 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.284294+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.284294+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 
cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.284651+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.284651+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.285390+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.285390+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:08:29.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.285783+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.285783+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.286151+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 
2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.286151+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.292348+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon a is now available 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: cluster 2026-03-10T11:08:29.292348+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon a is now available 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.304068+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.304068+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.307267+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.307267+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.320466+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 
2026-03-10T11:08:29.320466+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.322399+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[20758]: audit 2026-03-10T11:08:29.322399+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:29.733 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:29 vm00 bash[21028]: debug 2026-03-10T11:08:29.270+0000 7f0d733db140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:08:30.344 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T11:08:30.344 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-10T11:08:30.344 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T11:08:30.344 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T11:08:30.344 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 4 is available 2026-03-10T11:08:30.344 INFO:teuthology.orchestra.run.vm00.stdout:Setting orchestrator backend to cephadm... 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: cephadm 2026-03-10T11:08:29.301893+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: cephadm 2026-03-10T11:08:29.301893+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: audit 2026-03-10T11:08:29.328871+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: audit 2026-03-10T11:08:29.328871+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: audit 2026-03-10T11:08:29.348060+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: audit 2026-03-10T11:08:29.348060+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: audit 2026-03-10T11:08:29.944210+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: audit 2026-03-10T11:08:29.944210+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' 
entity='mgr.a' 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: audit 2026-03-10T11:08:29.947010+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: audit 2026-03-10T11:08:29.947010+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: cluster 2026-03-10T11:08:30.282550+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e6: a(active, since 1.00614s) 2026-03-10T11:08:30.721 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:30 vm00 bash[20758]: cluster 2026-03-10T11:08:30.282550+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e6: a(active, since 1.00614s) 2026-03-10T11:08:31.076 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T11:08:31.076 INFO:teuthology.orchestra.run.vm00.stdout:Using provided ssh private key and signed cert ... 2026-03-10T11:08:31.626 INFO:teuthology.orchestra.run.vm00.stdout:Adding host vm00... 
2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.285017+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.285017+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.289077+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.289077+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.698541+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.698541+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.704251+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 
2026-03-10T11:08:30.704251+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.711326+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:30.711326+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: cephadm 2026-03-10T11:08:30.970524+0000 mgr.a (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:11:08:30] ENGINE Bus STARTING 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: cephadm 2026-03-10T11:08:30.970524+0000 mgr.a (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:11:08:30] ENGINE Bus STARTING 2026-03-10T11:08:31.941 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:31.183743+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:31.942 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:31.183743+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:31.942 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:31.321702+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:31.942 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:31.321702+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:31.942 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:31.593920+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:31.942 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:31 vm00 bash[20758]: audit 2026-03-10T11:08:31.593920+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:08:32.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: audit 2026-03-10T11:08:31.018365+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:32.818 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: audit 2026-03-10T11:08:31.018365+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.081740+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Client ('192.168.123.100', 33726) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.081740+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Client ('192.168.123.100', 33726) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:08:32.819 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.081796+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.081796+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.182904+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.182904+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.183108+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Bus STARTED 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.183108+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Bus STARTED 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: audit 2026-03-10T11:08:31.318892+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: audit 2026-03-10T11:08:31.318892+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:32.819 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.322461+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Set ssh ssh_identity_key 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.322461+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Set ssh ssh_identity_key 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.322487+0000 mgr.a (mgr.14118) 13 : cephadm [INF] Set ssh private key 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.322487+0000 mgr.a (mgr.14118) 13 : cephadm [INF] Set ssh private key 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: audit 2026-03-10T11:08:31.591057+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: audit 2026-03-10T11:08:31.591057+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.594702+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Set ssh ssh_identity_cert 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cephadm 2026-03-10T11:08:31.594702+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Set ssh ssh_identity_cert 2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cluster 2026-03-10T11:08:31.713522+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-10T11:08:32.819 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: cluster 2026-03-10T11:08:31.713522+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e7: a(active, since 2s)
2026-03-10T11:08:32.819 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:32 vm00 bash[20758]: audit 2026-03-10T11:08:31.847777+0000 mgr.a (mgr.14118) 16 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:08:33.731 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:33 vm00 bash[20758]: cephadm 2026-03-10T11:08:32.913041+0000 mgr.a (mgr.14118) 17 : cephadm [INF] Deploying cephadm binary to vm00
2026-03-10T11:08:34.878 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Added host 'vm00' with addr '192.168.123.100'
2026-03-10T11:08:34.878 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mon service...
2026-03-10T11:08:35.204 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-10T11:08:35.204 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mgr service...
2026-03-10T11:08:35.478 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-10T11:08:36.020 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:35 vm00 bash[20758]: audit 2026-03-10T11:08:34.819275+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:08:36.020 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:35 vm00 bash[20758]: cephadm 2026-03-10T11:08:34.819761+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Added host vm00
2026-03-10T11:08:36.020 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:35 vm00 bash[20758]: audit 2026-03-10T11:08:34.822899+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:08:36.020 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:35 vm00 bash[20758]: audit 2026-03-10T11:08:35.164438+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:08:36.020 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:35 vm00 bash[20758]: audit 2026-03-10T11:08:35.437665+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:08:36.020 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:35 vm00 bash[20758]: audit 2026-03-10T11:08:35.727656+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.100:0/3584038836' entity='client.admin'
2026-03-10T11:08:36.049 INFO:teuthology.orchestra.run.vm00.stdout:Enabling the dashboard module...
2026-03-10T11:08:37.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:36 vm00 bash[20758]: audit 2026-03-10T11:08:35.147016+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:08:37.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:36 vm00 bash[20758]: cephadm 2026-03-10T11:08:35.147783+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-10T11:08:37.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:36 vm00 bash[20758]: audit 2026-03-10T11:08:35.434014+0000 mgr.a (mgr.14118) 21 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:08:37.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:36 vm00 bash[20758]: cephadm 2026-03-10T11:08:35.434712+0000 mgr.a (mgr.14118) 22 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-10T11:08:37.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:36 vm00 bash[20758]: audit 2026-03-10T11:08:36.003749+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.100:0/1381496584' entity='client.admin'
2026-03-10T11:08:37.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:36 vm00 bash[20758]: audit 2026-03-10T11:08:36.400441+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.100:0/2409685244' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T11:08:37.071 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:36 vm00 bash[20758]: audit 2026-03-10T11:08:36.520616+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:08:37.364 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[21028]: ignoring --setuser ceph since I am not root
2026-03-10T11:08:37.364 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[21028]: ignoring --setgroup ceph since I am not root
2026-03-10T11:08:37.364 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[21028]: debug 2026-03-10T11:08:37.190+0000 7f1c897a2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:08:37.364 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[21028]: debug 2026-03-10T11:08:37.234+0000 7f1c897a2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:08:37.467 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T11:08:37.467 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 8,
2026-03-10T11:08:37.467 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T11:08:37.467 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T11:08:37.467 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T11:08:37.467 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T11:08:37.467 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart...
2026-03-10T11:08:37.467 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 8...
2026-03-10T11:08:37.680 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[21028]: debug 2026-03-10T11:08:37.362+0000 7f1c897a2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T11:08:37.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[20758]: audit 2026-03-10T11:08:36.825964+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:08:37.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[20758]: audit 2026-03-10T11:08:37.004731+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.100:0/2409685244' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T11:08:37.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[20758]: cluster 2026-03-10T11:08:37.007184+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e8: a(active, since 7s)
2026-03-10T11:08:37.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[20758]: audit 2026-03-10T11:08:37.421119+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.100:0/2321814259' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T11:08:37.982 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:37 vm00 bash[21028]: debug 2026-03-10T11:08:37.678+0000 7f1c897a2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T11:08:38.465 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: debug 2026-03-10T11:08:38.102+0000 7f1c897a2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T11:08:38.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: debug 2026-03-10T11:08:38.194+0000 7f1c897a2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T11:08:38.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T11:08:38.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T11:08:38.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: from numpy import show_config as show_numpy_config
2026-03-10T11:08:38.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: debug 2026-03-10T11:08:38.322+0000 7f1c897a2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T11:08:38.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: debug 2026-03-10T11:08:38.462+0000 7f1c897a2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T11:08:38.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: debug 2026-03-10T11:08:38.502+0000 7f1c897a2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T11:08:38.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: debug 2026-03-10T11:08:38.542+0000 7f1c897a2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T11:08:38.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: debug 2026-03-10T11:08:38.586+0000 7f1c897a2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T11:08:38.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:38 vm00 bash[21028]: debug 2026-03-10T11:08:38.638+0000 7f1c897a2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T11:08:39.402 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.114+0000 7f1c897a2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T11:08:39.403 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.158+0000 7f1c897a2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T11:08:39.403 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.198+0000 7f1c897a2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T11:08:39.403 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.358+0000 7f1c897a2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T11:08:39.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.398+0000 7f1c897a2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T11:08:39.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.442+0000 7f1c897a2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T11:08:39.732 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.558+0000 7f1c897a2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:08:39.998 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.730+0000 7f1c897a2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T11:08:39.999 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.914+0000 7f1c897a2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T11:08:39.999 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.950+0000 7f1c897a2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T11:08:40.385 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:39 vm00 bash[21028]: debug 2026-03-10T11:08:39.994+0000 7f1c897a2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T11:08:40.385 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[21028]: debug 2026-03-10T11:08:40.154+0000 7f1c897a2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: cluster 2026-03-10T11:08:40.387412+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon a restarted
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: cluster 2026-03-10T11:08:40.387674+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon a
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: cluster 2026-03-10T11:08:40.391705+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: cluster 2026-03-10T11:08:40.392293+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00472655s)
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: audit 2026-03-10T11:08:40.393866+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: audit 2026-03-10T11:08:40.394712+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: audit 2026-03-10T11:08:40.395339+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: audit 2026-03-10T11:08:40.395478+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: audit 2026-03-10T11:08:40.395587+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: cluster 2026-03-10T11:08:40.400702+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon a is now available
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: audit 2026-03-10T11:08:40.420137+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:08:40.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[20758]: audit 2026-03-10T11:08:40.432004+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:08:40.733 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:08:40 vm00 bash[21028]: debug 2026-03-10T11:08:40.382+0000 7f1c897a2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T11:08:41.446 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T11:08:41.447 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10,
2026-03-10T11:08:41.447 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T11:08:41.447 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T11:08:41.447 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 8 is available
2026-03-10T11:08:41.447 INFO:teuthology.orchestra.run.vm00.stdout:Generating a dashboard self-signed certificate...
2026-03-10T11:08:41.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:41 vm00 bash[20758]: audit 2026-03-10T11:08:40.444460+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T11:08:41.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:41 vm00 bash[20758]: cluster 2026-03-10T11:08:41.394534+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e10: a(active, since 1.00697s)
2026-03-10T11:08:41.848 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T11:08:41.848 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial admin user...
2026-03-10T11:08:42.280 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$QoCAdLOYqi2duLxL/xKWgOv7nVfm/2nkiNADNmf1u/1/JHSN3jh7e", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773140922, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T11:08:42.281 INFO:teuthology.orchestra.run.vm00.stdout:Fetching dashboard port number...
2026-03-10T11:08:42.557 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T11:08:42.557 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T11:08:42.557 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T11:08:42.558 INFO:teuthology.orchestra.run.vm00.stdout:Ceph Dashboard is now available at:
2026-03-10T11:08:42.558 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.558 INFO:teuthology.orchestra.run.vm00.stdout: URL: https://vm00.local:8443/
2026-03-10T11:08:42.558 INFO:teuthology.orchestra.run.vm00.stdout: User: admin
2026-03-10T11:08:42.558 INFO:teuthology.orchestra.run.vm00.stdout: Password: n7ylq0x1nt
2026-03-10T11:08:42.558 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.558 INFO:teuthology.orchestra.run.vm00.stdout:Saving cluster configuration to /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config directory
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: cephadm 2026-03-10T11:08:41.551176+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Bus STARTING
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: cephadm 2026-03-10T11:08:41.654390+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: audit 2026-03-10T11:08:41.706823+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: cephadm 2026-03-10T11:08:41.767129+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: cephadm 2026-03-10T11:08:41.767170+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Bus STARTED
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: cephadm 2026-03-10T11:08:41.767509+0000 mgr.a (mgr.14150) 8 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Client ('192.168.123.100', 43220) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: audit 2026-03-10T11:08:41.772475+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: audit 2026-03-10T11:08:41.774702+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: audit 2026-03-10T11:08:42.084553+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: audit 2026-03-10T11:08:42.241141+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:08:42.905 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:42 vm00 bash[20758]: audit 2026-03-10T11:08:42.516630+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.100:0/739713134' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:Or, if you are only running a single cluster on this host:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph telemetry on
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:For more information see:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T11:08:42.935 INFO:teuthology.orchestra.run.vm00.stdout:Bootstrap complete.
2026-03-10T11:08:42.952 INFO:tasks.cephadm:Fetching config...
2026-03-10T11:08:42.953 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T11:08:42.953 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T11:08:42.955 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T11:08:42.955 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T11:08:42.956 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T11:08:43.002 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T11:08:43.002 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T11:08:43.002 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/keyring of=/dev/stdout
2026-03-10T11:08:43.050 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph config set mgr mgr/cephadm/allow_ptrace true
2026-03-10T11:08:44.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:43 vm00 bash[20758]: audit 2026-03-10T11:08:42.894244+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.100:0/3483490652' entity='client.admin'
2026-03-10T11:08:44.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:43 vm00 bash[20758]: cluster 2026-03-10T11:08:43.243393+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e11: a(active, since 2s)
2026-03-10T11:08:46.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:45 vm00 bash[20758]: audit 2026-03-10T11:08:44.797498+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:08:46.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:45 vm00 bash[20758]: audit 2026-03-10T11:08:45.418218+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:08:47.667 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config
2026-03-10T11:08:47.931 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:47 vm00 bash[20758]: cluster 2026-03-10T11:08:46.800726+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e12: a(active, since 6s)
2026-03-10T11:08:47.977 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755
2026-03-10T11:08:47.977 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph orch client-keyring set client.admin '*' --mode 0755
2026-03-10T11:08:48.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:48 vm00 bash[20758]: audit 2026-03-10T11:08:47.917549+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.100:0/3088215485' entity='client.admin'
2026-03-10T11:08:51.674 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config
2026-03-10T11:08:52.070 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm03
2026-03-10T11:08:52.071 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T11:08:52.071 DEBUG:teuthology.orchestra.run.vm03:> dd of=/etc/ceph/ceph.conf
2026-03-10T11:08:52.074 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-10T11:08:52.074 DEBUG:teuthology.orchestra.run.vm03:> dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:08:52.120 INFO:tasks.cephadm:Adding host vm03 to orchestrator...
2026-03-10T11:08:52.120 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph orch host add vm03
2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.241736+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.244072+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 
bash[20758]: audit 2026-03-10T11:08:51.244676+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.244676+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.247301+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.247301+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.253201+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.253201+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.255654+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.255654+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.941792+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.941792+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.944772+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.944772+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.945481+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.945481+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.946517+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' 
entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.946517+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:08:52.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.947007+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:51.947007+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: cephadm 2026-03-10T11:08:51.947698+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: cephadm 2026-03-10T11:08:51.947698+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: cephadm 2026-03-10T11:08:51.981622+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: cephadm 2026-03-10T11:08:51.981622+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: cephadm 
2026-03-10T11:08:52.018546+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: cephadm 2026-03-10T11:08:52.018546+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: cephadm 2026-03-10T11:08:52.061052+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.client.admin.keyring 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: cephadm 2026-03-10T11:08:52.061052+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.client.admin.keyring 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:52.096475+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:52.096475+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:52.099252+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:52.099252+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:52.101844+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:52.483 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:52 vm00 bash[20758]: audit 2026-03-10T11:08:52.101844+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:08:56.729 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:08:57.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:57 vm00 bash[20758]: audit 2026-03-10T11:08:56.981796+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:57.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:57 vm00 bash[20758]: audit 2026-03-10T11:08:56.981796+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:08:58.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:58 vm00 bash[20758]: cephadm 2026-03-10T11:08:57.899794+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-10T11:08:58.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:08:58 vm00 bash[20758]: cephadm 2026-03-10T11:08:57.899794+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-10T11:08:59.715 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm03' with addr '192.168.123.103' 2026-03-10T11:08:59.767 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph orch host ls --format=json 2026-03-10T11:09:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:00 vm00 bash[20758]: audit 
2026-03-10T11:08:59.713930+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:00 vm00 bash[20758]: audit 2026-03-10T11:08:59.713930+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:00 vm00 bash[20758]: cephadm 2026-03-10T11:08:59.714733+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm03 2026-03-10T11:09:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:00 vm00 bash[20758]: cephadm 2026-03-10T11:08:59.714733+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm03 2026-03-10T11:09:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:00 vm00 bash[20758]: audit 2026-03-10T11:08:59.715172+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:00 vm00 bash[20758]: audit 2026-03-10T11:08:59.715172+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:00 vm00 bash[20758]: audit 2026-03-10T11:09:00.001709+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:00 vm00 bash[20758]: audit 2026-03-10T11:09:00.001709+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:01 vm00 bash[20758]: cluster 2026-03-10T11:09:00.396470+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-10T11:09:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:01 vm00 bash[20758]: cluster 2026-03-10T11:09:00.396470+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:01 vm00 bash[20758]: audit 2026-03-10T11:09:01.298973+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:01 vm00 bash[20758]: audit 2026-03-10T11:09:01.298973+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:03.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:02 vm00 bash[20758]: audit 2026-03-10T11:09:01.868996+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:03.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:02 vm00 bash[20758]: audit 2026-03-10T11:09:01.868996+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:04.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:03 vm00 bash[20758]: cluster 2026-03-10T11:09:02.396685+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:04.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:03 vm00 bash[20758]: cluster 2026-03-10T11:09:02.396685+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:04.382 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:09:04.650 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:09:04.650 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.103", "hostname": 
"vm03", "labels": [], "status": ""}] 2026-03-10T11:09:04.707 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T11:09:04.707 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph osd crush tunables default 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cluster 2026-03-10T11:09:04.396868+0000 mgr.a (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cluster 2026-03-10T11:09:04.396868+0000 mgr.a (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.605900+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.605900+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.608170+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.608170+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.610814+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.610814+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.612841+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.612841+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.613342+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.613342+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.613989+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.613989+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:05.982 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.614397+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.614397+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cephadm 2026-03-10T11:09:04.615010+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cephadm 2026-03-10T11:09:04.615010+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cephadm 2026-03-10T11:09:04.643935+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cephadm 2026-03-10T11:09:04.643935+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.650948+0000 mgr.a (mgr.14150) 23 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.650948+0000 mgr.a (mgr.14150) 23 : audit [DBG] from='client.14176 -' 
entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cephadm 2026-03-10T11:09:04.679553+0000 mgr.a (mgr.14150) 24 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cephadm 2026-03-10T11:09:04.679553+0000 mgr.a (mgr.14150) 24 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cephadm 2026-03-10T11:09:04.715534+0000 mgr.a (mgr.14150) 25 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.client.admin.keyring 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: cephadm 2026-03-10T11:09:04.715534+0000 mgr.a (mgr.14150) 25 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.client.admin.keyring 2026-03-10T11:09:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.755734+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.983 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.755734+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.983 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.758132+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.983 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.758132+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' 
entity='mgr.a' 2026-03-10T11:09:05.983 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.760783+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:05.983 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:05 vm00 bash[20758]: audit 2026-03-10T11:09:04.760783+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:07.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:07 vm00 bash[20758]: cluster 2026-03-10T11:09:06.397029+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:07.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:07 vm00 bash[20758]: cluster 2026-03-10T11:09:06.397029+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:08.392 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:09:09.615 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-10T11:09:09.667 INFO:tasks.cephadm:Adding mon.a on vm00 2026-03-10T11:09:09.667 INFO:tasks.cephadm:Adding mon.b on vm03 2026-03-10T11:09:09.667 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph orch apply mon '2;vm00:192.168.123.100=a;vm03:192.168.123.103=b' 2026-03-10T11:09:09.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:09 vm00 bash[20758]: cluster 2026-03-10T11:09:08.397176+0000 mgr.a (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:09.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:09 vm00 bash[20758]: cluster 
2026-03-10T11:09:08.397176+0000 mgr.a (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:09.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:09 vm00 bash[20758]: audit 2026-03-10T11:09:08.634792+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.100:0/3368325859' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T11:09:09.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:09 vm00 bash[20758]: audit 2026-03-10T11:09:08.634792+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.100:0/3368325859' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T11:09:10.778 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:10.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:10 vm00 bash[20758]: audit 2026-03-10T11:09:09.616119+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.100:0/3368325859' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T11:09:10.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:10 vm00 bash[20758]: audit 2026-03-10T11:09:09.616119+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.100:0/3368325859' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T11:09:10.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:10 vm00 bash[20758]: cluster 2026-03-10T11:09:09.617961+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:09:10.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:10 vm00 bash[20758]: cluster 2026-03-10T11:09:09.617961+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:09:11.029 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mon update... 
2026-03-10T11:09:11.146 DEBUG:teuthology.orchestra.run.vm03:mon.b> sudo journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.b.service 2026-03-10T11:09:11.147 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T11:09:11.147 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph mon dump -f json 2026-03-10T11:09:12.300 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.b/config 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: cluster 2026-03-10T11:09:10.397325+0000 mgr.a (mgr.14150) 28 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: cluster 2026-03-10T11:09:10.397325+0000 mgr.a (mgr.14150) 28 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.024848+0000 mgr.a (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm00:192.168.123.100=a;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.024848+0000 mgr.a (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm00:192.168.123.100=a;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: cephadm 
2026-03-10T11:09:11.025936+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm03:192.168.123.103=b;count:2 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: cephadm 2026-03-10T11:09:11.025936+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm03:192.168.123.103=b;count:2 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.029066+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.029066+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.030113+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.030113+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.031113+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.031113+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.031618+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.031618+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.034313+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.034313+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.035387+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.035387+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.036032+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
11:09:12 vm00 bash[20758]: audit 2026-03-10T11:09:11.036032+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: cephadm 2026-03-10T11:09:11.036679+0000 mgr.a (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm03 2026-03-10T11:09:12.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:12 vm00 bash[20758]: cephadm 2026-03-10T11:09:11.036679+0000 mgr.a (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm03 2026-03-10T11:09:12.653 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 systemd[1]: Started Ceph mon.b for 507c5972-1c71-11f1-afff-ff6f68248060. 2026-03-10T11:09:12.894 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:09:12.895 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":1,"fsid":"507c5972-1c71-11f1-afff-ff6f68248060","modified":"2026-03-10T11:08:16.714328Z","created":"2026-03-10T11:08:16.714328Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T11:09:12.895 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 1 2026-03-10T11:09:12.907 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T11:09:12.907 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T11:09:12.907 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T11:09:12.907 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 0 load: jerasure load: lrc 2026-03-10T11:09:12.907 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T11:09:12.907 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Git sha 0 2026-03-10T11:09:12.907 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T11:09:12.907 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: DB SUMMARY 2026-03-10T11:09:12.907 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: DB Session ID: NC1UHMHC1Y7RN49TW9N7 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 
rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 0, files: 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000004.log size: 511 ; 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.env: 0x5599ceb1adc0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 
rocksdb: Options.fs: PosixFileSystem 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.info_log: 0x5599ed323880 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 
bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.db_log_dir: 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.wal_dir: 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.785+0000 7f7768218d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 
2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.write_buffer_manager: 0x5599ed327900 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T11:09:12.908 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T11:09:12.908 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.row_cache: None 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.wal_filter: None 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 
2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T11:09:12.909 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 
bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 
rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Compression algorithms supported: 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: kZSTD supported: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T11:09:12.909 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 
vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.merge_operator: 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T11:09:12.910 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5599ed322480) 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cache_index_and_filter_blocks: 1 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: pin_top_level_index_and_filter: 1 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: index_type: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: data_block_index_type: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: index_shortening: 1 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: checksum: 4 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: no_block_cache: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: block_cache: 
0x5599ed349350 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: block_cache_name: BinnedLRUCache 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: block_cache_options: 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: capacity : 536870912 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: num_shard_bits : 4 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: strict_capacity_limit : 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: high_pri_pool_ratio: 0.000 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: block_cache_compressed: (nil) 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: persistent_cache: (nil) 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: block_size: 4096 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: block_size_deviation: 10 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: block_restart_interval: 16 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: index_block_restart_interval: 1 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: metadata_block_size: 4096 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: partition_filters: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: use_delta_encoding: 1 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: filter_policy: bloomfilter 2026-03-10T11:09:12.910 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: whole_key_filtering: 1 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: verify_compression: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: read_amp_bytes_per_bit: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: format_version: 5 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: enable_index_compression: 1 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: block_align: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: max_auto_readahead_size: 262144 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: prepopulate_block_cache: 0 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: initial_auto_readahead_size: 8192 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: num_file_reads_for_auto_readahead: 2 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression: NoCompression 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression: Disabled 
2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T11:09:12.910 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.num_levels: 7 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: 
Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 
bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 
vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 
2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: 
Options.compaction_options_universal.size_ratio: 1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T11:09:12.911 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 
2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: 
Options.report_bg_io_stats: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T11:09:12.912 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 439ba08e-692c-4234-b282-3a8ee27a6561 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773140952795292, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-10T11:09:12.912 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.789+0000 7f7768218d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.797+0000 7f7768218d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773140952804062, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773140952, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "439ba08e-692c-4234-b282-3a8ee27a6561", "db_session_id": "NC1UHMHC1Y7RN49TW9N7", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.797+0000 7f7768218d80 4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1773140952804140, "job": 1, "event": "recovery_finished"} 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.797+0000 7f7768218d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.801+0000 7f7768218d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.801+0000 7f7768218d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5599ed34ae00 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.801+0000 7f7768218d80 4 rocksdb: DB pointer 0x5599ed456000 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.801+0000 7f7768218d80 0 mon.b does not exist in monmap, will attempt to join an existing cluster 2026-03-10T11:09:12.912 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.801+0000 7f7768218d80 0 using public_addr v2:192.168.123.103:0/0 -> [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.801+0000 7f7768218d80 0 starting mon.b rank -1 at public addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] at bind addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid 507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.801+0000 
7f7768218d80 1 mon.b@-1(???) e0 preinit fsid 507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.805+0000 7f775dfe2640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.805+0000 7f775dfe2640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: ** DB Stats ** 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: ** Compaction Stats [default] ** 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
11:09:12 vm03 bash[23405]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.009 0 0 0.0 0.0 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.009 0 0 0.0 0.0 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.009 0 0 0.0 0.0 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: ** Compaction Stats [default] ** 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.009 0 0 0.0 0.0 2026-03-10T11:09:12.913 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: AddFile(Keys): cumulative 0, interval 0 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Block cache BinnedLRUCache@0x5599ed349350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 
last_secs: 5e-06 secs_since: 0 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: ** File Read Latency Histogram By Level [default] ** 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.837+0000 7f7760fe8640 0 mon.b@-1(synchronizing).mds e1 new map 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.837+0000 7f7760fe8640 0 mon.b@-1(synchronizing).mds e1 print_map 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: e1 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: btime 2026-03-10T11:08:18:119019+0000 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: legacy client fscid: -1 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: No filesystems configured 2026-03-10T11:09:12.913 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.837+0000 7f7760fe8640 1 mon.b@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.837+0000 7f7760fe8640 1 mon.b@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.837+0000 7f7760fe8640 1 mon.b@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.837+0000 7f7760fe8640 1 mon.b@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.837+0000 7f7760fe8640 1 mon.b@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.837+0000 7f7760fe8640 1 mon.b@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.845+0000 7f7760fe8640 0 mon.b@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.845+0000 7f7760fe8640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T11:09:12.913 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: debug 2026-03-10T11:09:12.845+0000 7f7760fe8640 0 mon.b@-1(synchronizing).osd e4 crush map has 
features 288514050185494528, adjusting msgr requires
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:18.119467+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:18.114362+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148020+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148087+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148093+0000 mon.a (mon.0) 3 : cluster [DBG] fsid 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148100+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T11:08:16.714328+0000
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148111+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T11:08:16.714328+0000
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148116+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148125+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148129+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148440+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148481+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:19.148957+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:19.226189+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/377900789' entity='client.admin'
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:19.927519+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/722572403' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:22.233384+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/3326470489' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:23.296445+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:23.344325+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0480255s)
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.347676+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.347778+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.347851+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.347924+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:09:12.914 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.348710+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:23.356849+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon a is now available
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.371341+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.373530+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.374511+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.376179+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:23.379611+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.100:0/451846537' entity='mgr.a'
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:24.353887+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: a(active, since 1.05759s)
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:24.663996+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.100:0/3273142066' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:24.933706+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.100:0/202838315' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:24.936398+0000 mon.a (mon.0) 31 : audit [INF] from='client.? 192.168.123.100:0/202838315' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:25.487518+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.100:0/3251125133' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:25.937011+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/3251125133' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:25.940524+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e4: a(active, since 2s)
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:26.344256+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.100:0/3546445853' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:29.276091+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon a restarted
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:29.276506+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon a
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:29.281620+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:29.281736+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00532358s)
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.284294+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.284651+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.285390+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:09:12.915 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.285783+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.286151+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:29.292348+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon a is now available
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.304068+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.307267+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.320466+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.322399+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:29.301893+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.328871+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.348060+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.944210+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:29.947010+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:30.282550+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e6: a(active, since 1.00614s)
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:30.285017+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:30.289077+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:30.698541+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:30.704251+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:30.711326+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:30.970524+0000 mgr.a (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:11:08:30] ENGINE Bus STARTING
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:31.183743+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:31.321702+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:31.593920+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:31.018365+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:31.081740+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Client ('192.168.123.100', 33726) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:09:12.916 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:31.081796+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:31.182904+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:31.183108+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [10/Mar/2026:11:08:31] ENGINE Bus STARTED
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:31.318892+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:31.322461+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Set ssh ssh_identity_key
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:31.322487+0000 mgr.a (mgr.14118) 13 : cephadm [INF] Set ssh private key
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:31.591057+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:31.594702+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Set ssh ssh_identity_cert
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:31.713522+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e7: a(active, since 2s)
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:31.847777+0000 mgr.a (mgr.14118) 16 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:32.913041+0000 mgr.a (mgr.14118) 17 : cephadm [INF] Deploying cephadm binary to vm00
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:34.819275+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:34.819761+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Added host vm00
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:34.822899+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.164438+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.437665+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a'
2026-03-10T11:09:13.319
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.437665+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.727656+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.100:0/3584038836' entity='client.admin' 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.727656+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.100:0/3584038836' entity='client.admin' 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.147016+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.147016+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:35.147783+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:35.147783+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.434014+0000 mgr.a (mgr.14118) 21 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch 
apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:35.434014+0000 mgr.a (mgr.14118) 21 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:35.434712+0000 mgr.a (mgr.14118) 22 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T11:09:13.319 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:35.434712+0000 mgr.a (mgr.14118) 22 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:36.003749+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.100:0/1381496584' entity='client.admin' 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:36.003749+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.100:0/1381496584' entity='client.admin' 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:36.400441+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 192.168.123.100:0/2409685244' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:36.400441+0000 mon.a (mon.0) 67 : audit [INF] from='client.? 
192.168.123.100:0/2409685244' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:36.520616+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:36.520616+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:36.825964+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:36.825964+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.100:0/1218535973' entity='mgr.a' 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:37.004731+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.100:0/2409685244' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:37.004731+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 
192.168.123.100:0/2409685244' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:37.007184+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e8: a(active, since 7s) 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:37.007184+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e8: a(active, since 7s) 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:37.421119+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.100:0/2321814259' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:37.421119+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.100:0/2321814259' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.387412+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon a restarted 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.387412+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon a restarted 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.387674+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon a 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.387674+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon a 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 
2026-03-10T11:08:40.391705+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.391705+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.392293+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00472655s) 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.392293+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00472655s) 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.393866+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.393866+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.394712+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.394712+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 
2026-03-10T11:08:40.395339+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.395339+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.395478+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.395478+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.395587+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.395587+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.400702+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon a is now available 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:40.400702+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon a is now available 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 
vm03 bash[23405]: audit 2026-03-10T11:08:40.420137+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.420137+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.432004+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.432004+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.444460+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:40.444460+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:41.394534+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e10: a(active, since 
1.00697s) 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:41.394534+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e10: a(active, since 1.00697s) 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.551176+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Bus STARTING 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.551176+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Bus STARTING 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.654390+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.654390+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:41.706823+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:41.706823+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.767129+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Serving on https://192.168.123.100:7150 
2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.767129+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.767170+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Bus STARTED 2026-03-10T11:09:13.320 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.767170+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Bus STARTED 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.767509+0000 mgr.a (mgr.14150) 8 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Client ('192.168.123.100', 43220) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:41.767509+0000 mgr.a (mgr.14150) 8 : cephadm [INF] [10/Mar/2026:11:08:41] ENGINE Client ('192.168.123.100', 43220) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:41.772475+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:41.772475+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:41.774702+0000 mon.a 
(mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:41.774702+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:42.084553+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:42.084553+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:42.241141+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:42.241141+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:42.516630+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 
192.168.123.100:0/739713134' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:42.516630+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.100:0/739713134' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:42.894244+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.100:0/3483490652' entity='client.admin' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:42.894244+0000 mon.a (mon.0) 91 : audit [INF] from='client.? 192.168.123.100:0/3483490652' entity='client.admin' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:43.243393+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:43.243393+0000 mon.a (mon.0) 92 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:44.797498+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:44.797498+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:45.418218+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:45.418218+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:46.800726+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:08:46.800726+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:47.917549+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.100:0/3088215485' entity='client.admin' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:47.917549+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 
192.168.123.100:0/3088215485' entity='client.admin' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.241736+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.241736+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.244072+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.244072+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.244676+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.244676+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.247301+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.247301+0000 mon.a (mon.0) 100 : audit [INF] 
from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.253201+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.253201+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.255654+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.255654+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.321 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.941792+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.941792+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.944772+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.944772+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.945481+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.945481+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.946517+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.946517+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.947007+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:51.947007+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:51.947698+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:51.947698+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:51.981622+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:51.981622+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:52.018546+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:52.018546+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:52.061052+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.client.admin.keyring 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:52.061052+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.client.admin.keyring 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:52.096475+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:52.096475+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:52.099252+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:52.099252+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:52.101844+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:52.101844+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:56.981796+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:56.981796+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: 
cephadm 2026-03-10T11:08:57.899794+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:57.899794+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:59.713930+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:59.713930+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:59.714733+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm03 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:08:59.714733+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm03 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:59.715172+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:08:59.715172+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:00.001709+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:00.001709+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:00.396470+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:00.396470+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:01.298973+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:01.298973+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:01.868996+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:01.868996+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:02.396685+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:02.396685+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B 
data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:04.396868+0000 mgr.a (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:04.396868+0000 mgr.a (mgr.14150) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.605900+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.605900+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.608170+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.608170+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.610814+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.322 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.610814+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.612841+0000 mon.a 
(mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.612841+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.613342+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.613342+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.613989+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.613989+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.614397+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.614397+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:04.615010+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:04.615010+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:04.643935+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:04.643935+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.650948+0000 mgr.a (mgr.14150) 23 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.650948+0000 mgr.a (mgr.14150) 23 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:04.679553+0000 mgr.a (mgr.14150) 24 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 
2026-03-10T11:09:04.679553+0000 mgr.a (mgr.14150) 24 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:04.715534+0000 mgr.a (mgr.14150) 25 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.client.admin.keyring 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:04.715534+0000 mgr.a (mgr.14150) 25 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.client.admin.keyring 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.755734+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.755734+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.758132+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.758132+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.760783+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:04.760783+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:06.397029+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:06.397029+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:08.397176+0000 mgr.a (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:08.397176+0000 mgr.a (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:08.634792+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.100:0/3368325859' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:08.634792+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.100:0/3368325859' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:09.616119+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 
192.168.123.100:0/3368325859' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:09.616119+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.100:0/3368325859' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:09.617961+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:09.617961+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:10.397325+0000 mgr.a (mgr.14150) 28 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cluster 2026-03-10T11:09:10.397325+0000 mgr.a (mgr.14150) 28 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.024848+0000 mgr.a (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm00:192.168.123.100=a;vm03:192.168.123.103=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.024848+0000 mgr.a (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm00:192.168.123.100=a;vm03:192.168.123.103=b", 
"target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:11.025936+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm03:192.168.123.103=b;count:2 2026-03-10T11:09:13.323 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:11.025936+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm03:192.168.123.103=b;count:2 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.029066+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.029066+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.030113+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.030113+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.031113+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 
2026-03-10T11:09:11.031113+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.031618+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.031618+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.034313+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.034313+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.035387+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.035387+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.036032+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: audit 2026-03-10T11:09:11.036032+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:11.036679+0000 mgr.a (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm03 2026-03-10T11:09:13.324 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:12 vm03 bash[23405]: cephadm 2026-03-10T11:09:11.036679+0000 mgr.a (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm03 2026-03-10T11:09:14.006 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T11:09:14.006 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph mon dump -f json 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:12.397473+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:12.397473+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:12.933347+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:12.933347+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:12.934159+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:12.934159+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:12.935818+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:12.935818+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:13.922094+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:13.922094+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:14.397637+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:18.317 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:14.397637+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:14.923010+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:14.923010+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:14.923138+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:14.923138+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:15.922486+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: audit 2026-03-10T11:09:15.922486+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:16.397798+0000 mgr.a (mgr.14150) 34 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:18.318 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:17 vm03 bash[23405]: cluster 2026-03-10T11:09:16.397798+0000 mgr.a (mgr.14150) 34 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: audit 2026-03-10T11:09:16.922541+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: audit 2026-03-10T11:09:17.922486+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.940751+0000 mon.a (mon.0) 149 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944006+0000 mon.a (mon.0) 150 : cluster [DBG] monmap epoch 2
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944024+0000 mon.a (mon.0) 151 : cluster [DBG] fsid 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944035+0000 mon.a (mon.0) 152 : cluster [DBG] last_changed 2026-03-10T11:09:12.930129+0000
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944045+0000 mon.a (mon.0) 153 : cluster [DBG] created 2026-03-10T11:08:16.714328+0000
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944054+0000 mon.a (mon.0) 154 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944065+0000 mon.a (mon.0) 155 : cluster [DBG] election_strategy: 1
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944075+0000 mon.a (mon.0) 156 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944085+0000 mon.a (mon.0) 157 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944334+0000 mon.a (mon.0) 158 : cluster [DBG] fsmap
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944355+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944475+0000 mon.a (mon.0) 160 : cluster [DBG] mgrmap e12: a(active, since 37s)
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: cluster 2026-03-10T11:09:17.944589+0000 mon.a (mon.0) 161 : cluster [INF] overall HEALTH_OK
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: audit 2026-03-10T11:09:17.946791+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: audit 2026-03-10T11:09:17.949726+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: audit 2026-03-10T11:09:17.952937+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: audit 2026-03-10T11:09:17.953870+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:09:18.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:18 vm03 bash[23405]: audit 2026-03-10T11:09:17.954335+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:17 vm00 bash[20758]: cluster 2026-03-10T11:09:12.397473+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:17 vm00 bash[20758]: audit 2026-03-10T11:09:12.933347+0000 mon.a (mon.0) 141 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:17 vm00 bash[20758]: audit 2026-03-10T11:09:12.934159+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:17 vm00 bash[20758]: cluster 2026-03-10T11:09:12.935818+0000 mon.a (mon.0) 143 : cluster [INF] mon.a calling monitor election
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:13.922094+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:14.397637+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:14.923010+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:14.923138+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:15.922486+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:16.397798+0000 mgr.a (mgr.14150) 34 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:16.922541+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:17.922486+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:18.324 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.940751+0000 mon.a (mon.0) 149 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944006+0000 mon.a (mon.0) 150 : cluster [DBG] monmap epoch 2
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944024+0000 mon.a (mon.0) 151 : cluster [DBG] fsid 507c5972-1c71-11f1-afff-ff6f68248060
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944035+0000 mon.a (mon.0) 152 : cluster [DBG] last_changed 2026-03-10T11:09:12.930129+0000
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944045+0000 mon.a (mon.0) 153 : cluster [DBG] created 2026-03-10T11:08:16.714328+0000
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944054+0000 mon.a (mon.0) 154 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944065+0000 mon.a (mon.0) 155 : cluster [DBG] election_strategy: 1
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944075+0000 mon.a (mon.0) 156 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944085+0000 mon.a (mon.0) 157 : cluster [DBG] 1: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.b
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944334+0000 mon.a (mon.0) 158 : cluster [DBG] fsmap
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944355+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944475+0000 mon.a (mon.0) 160 : cluster [DBG] mgrmap e12: a(active, since 37s)
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: cluster 2026-03-10T11:09:17.944589+0000 mon.a (mon.0) 161 : cluster [INF] overall HEALTH_OK
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:17.946791+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:17.949726+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:17.952937+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:17.953870+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:09:18.325 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:18 vm00 bash[20758]: audit 2026-03-10T11:09:17.954335+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:09:18.525 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.b/config
2026-03-10T11:09:18.878 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-10T11:09:18.878 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":2,"fsid":"507c5972-1c71-11f1-afff-ff6f68248060","modified":"2026-03-10T11:09:12.930129Z","created":"2026-03-10T11:08:16.714328Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T11:09:18.878 INFO:teuthology.orchestra.run.vm03.stderr:dumped monmap epoch 2
2026-03-10T11:09:19.019 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T11:09:19.019 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph config generate-minimal-conf
2026-03-10T11:09:19.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: cephadm 2026-03-10T11:09:17.954900+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T11:09:19.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: cephadm 2026-03-10T11:09:17.955283+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf
2026-03-10T11:09:19.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: cephadm 2026-03-10T11:09:17.988415+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf
2026-03-10T11:09:19.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: cephadm 2026-03-10T11:09:17.994212+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf
2026-03-10T11:09:19.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.025029+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.027963+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.041353+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.043973+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.046672+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.058416+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.060850+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.063116+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.065355+0000 mon.a (mon.0) 175 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: cephadm 2026-03-10T11:09:18.065593+0000 mgr.a (mgr.14150) 39 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.065910+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.066305+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.066650+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: cephadm 2026-03-10T11:09:18.067051+0000 mgr.a (mgr.14150) 40 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.447320+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.450367+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.451031+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.451521+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.451921+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.878332+0000 mon.a (mon.0) 184 : audit [DBG] from='client.? 192.168.123.103:0/3670556488' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.922833+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.992821+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.997326+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.998221+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.999275+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:18.999780+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:19.003915+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a'
2026-03-10T11:09:19.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:19 vm03 bash[23405]: audit 2026-03-10T11:09:19.003915+0000 mon.a
(mon.0) 191 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:17.954900+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:17.954900+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:17.955283+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:17.955283+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:17.988415+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:17.988415+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:17.994212+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:17.994212+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/config/ceph.conf 2026-03-10T11:09:19.322 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.025029+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.025029+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.027963+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.027963+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.041353+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.041353+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.043973+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.043973+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.046672+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.046672+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.058416+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.058416+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.060850+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.060850+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.063116+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.063116+0000 mon.a (mon.0) 174 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.065355+0000 mon.a (mon.0) 175 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.065355+0000 
mon.a (mon.0) 175 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:18.065593+0000 mgr.a (mgr.14150) 39 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:18.065593+0000 mgr.a (mgr.14150) 39 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.065910+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.065910+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.066305+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.066305+0000 mon.a (mon.0) 177 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.066650+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.066650+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:18.067051+0000 mgr.a (mgr.14150) 40 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: cephadm 2026-03-10T11:09:18.067051+0000 mgr.a (mgr.14150) 40 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.447320+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.447320+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.450367+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.450367+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.451031+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 
bash[20758]: audit 2026-03-10T11:09:18.451031+0000 mon.a (mon.0) 181 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.451521+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:09:19.322 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.451521+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.451921+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.451921+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.878332+0000 mon.a (mon.0) 184 : audit [DBG] from='client.? 192.168.123.103:0/3670556488' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.878332+0000 mon.a (mon.0) 184 : audit [DBG] from='client.? 
192.168.123.103:0/3670556488' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.922833+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.922833+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.992821+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.992821+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.997326+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.997326+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.998221+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 
2026-03-10T11:09:18.998221+0000 mon.a (mon.0) 188 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.999275+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.999275+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.999780+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:18.999780+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:19.003915+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:19.323 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[20758]: audit 2026-03-10T11:09:19.003915+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:20.232 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:09:19 vm00 bash[21028]: debug 2026-03-10T11:09:19.918+0000 7f1c55b0e640 -1 mgr.server handle_report got status from non-daemon mon.b 
2026-03-10T11:09:20.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:20 vm00 bash[20758]: cluster 2026-03-10T11:09:18.397989+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:20.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:20 vm00 bash[20758]: cluster 2026-03-10T11:09:18.397989+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:20.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:20 vm00 bash[20758]: cephadm 2026-03-10T11:09:18.450865+0000 mgr.a (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T11:09:20.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:20 vm00 bash[20758]: cephadm 2026-03-10T11:09:18.450865+0000 mgr.a (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T11:09:20.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:20 vm00 bash[20758]: cephadm 2026-03-10T11:09:18.452349+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T11:09:20.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:20 vm00 bash[20758]: cephadm 2026-03-10T11:09:18.452349+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T11:09:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:20 vm03 bash[23405]: cluster 2026-03-10T11:09:18.397989+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:20 vm03 bash[23405]: cluster 2026-03-10T11:09:18.397989+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:20 vm03 bash[23405]: cephadm 2026-03-10T11:09:18.450865+0000 mgr.a (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T11:09:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:20 vm03 bash[23405]: cephadm 2026-03-10T11:09:18.450865+0000 mgr.a (mgr.14150) 42 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T11:09:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:20 vm03 bash[23405]: cephadm 2026-03-10T11:09:18.452349+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T11:09:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:20 vm03 bash[23405]: cephadm 2026-03-10T11:09:18.452349+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Reconfiguring daemon mon.b on vm03 2026-03-10T11:09:21.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:21 vm00 bash[20758]: cluster 2026-03-10T11:09:20.398148+0000 mgr.a (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:21.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:21 vm00 bash[20758]: cluster 2026-03-10T11:09:20.398148+0000 mgr.a (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:21.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:21 vm03 bash[23405]: cluster 2026-03-10T11:09:20.398148+0000 mgr.a (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:21.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:21 vm03 bash[23405]: cluster 2026-03-10T11:09:20.398148+0000 mgr.a (mgr.14150) 44 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:23.637 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:09:23.875 INFO:teuthology.orchestra.run.vm00.stdout:# minimal ceph.conf for 507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:09:23.875 INFO:teuthology.orchestra.run.vm00.stdout:[global] 2026-03-10T11:09:23.875 INFO:teuthology.orchestra.run.vm00.stdout: fsid = 
507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:09:23.875 INFO:teuthology.orchestra.run.vm00.stdout: mon_host = [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] 2026-03-10T11:09:23.934 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:23 vm00 bash[20758]: cluster 2026-03-10T11:09:22.398395+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:23.934 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:23 vm00 bash[20758]: cluster 2026-03-10T11:09:22.398395+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:23.935 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-10T11:09:23.935 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:09:23.935 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T11:09:23.942 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:09:23.942 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:09:23.994 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T11:09:23.994 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T11:09:24.001 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T11:09:24.001 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:09:24.051 INFO:tasks.cephadm:Adding mgr.a on vm00 2026-03-10T11:09:24.051 INFO:tasks.cephadm:Adding mgr.b on vm03 2026-03-10T11:09:24.051 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph orch apply mgr '2;vm00=a;vm03=b' 2026-03-10T11:09:24.097 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:23 
vm03 bash[23405]: cluster 2026-03-10T11:09:22.398395+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:24.097 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:23 vm03 bash[23405]: cluster 2026-03-10T11:09:22.398395+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:24.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:24 vm00 bash[20758]: audit 2026-03-10T11:09:23.875686+0000 mon.a (mon.0) 192 : audit [DBG] from='client.? 192.168.123.100:0/1165237359' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:24.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:24 vm00 bash[20758]: audit 2026-03-10T11:09:23.875686+0000 mon.a (mon.0) 192 : audit [DBG] from='client.? 192.168.123.100:0/1165237359' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:25.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:24 vm03 bash[23405]: audit 2026-03-10T11:09:23.875686+0000 mon.a (mon.0) 192 : audit [DBG] from='client.? 192.168.123.100:0/1165237359' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:25.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:24 vm03 bash[23405]: audit 2026-03-10T11:09:23.875686+0000 mon.a (mon.0) 192 : audit [DBG] from='client.? 
192.168.123.100:0/1165237359' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:26.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:25 vm00 bash[20758]: cluster 2026-03-10T11:09:24.398648+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:26.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:25 vm00 bash[20758]: cluster 2026-03-10T11:09:24.398648+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:26.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:25 vm03 bash[23405]: cluster 2026-03-10T11:09:24.398648+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:26.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:25 vm03 bash[23405]: cluster 2026-03-10T11:09:24.398648+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:27.694 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.b/config 2026-03-10T11:09:28.014 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled mgr update... 2026-03-10T11:09:28.039 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:27 vm03 bash[23405]: cluster 2026-03-10T11:09:26.398899+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:28.039 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:27 vm03 bash[23405]: cluster 2026-03-10T11:09:26.398899+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:28.079 DEBUG:teuthology.orchestra.run.vm03:mgr.b> sudo journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.b.service 2026-03-10T11:09:28.122 INFO:tasks.cephadm:Deploying OSDs... 
2026-03-10T11:09:28.122 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:09:28.122 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T11:09:28.125 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T11:09:28.125 DEBUG:teuthology.orchestra.run.vm00:> ls /dev/[sv]d? 2026-03-10T11:09:28.169 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vda 2026-03-10T11:09:28.169 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdb 2026-03-10T11:09:28.169 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdc 2026-03-10T11:09:28.169 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdd 2026-03-10T11:09:28.170 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vde 2026-03-10T11:09:28.170 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T11:09:28.170 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T11:09:28.170 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdb 2026-03-10T11:09:28.213 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdb 2026-03-10T11:09:28.213 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T11:09:28.213 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T11:09:28.213 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T11:09:28.213 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 11:03:20.110816616 +0000 2026-03-10T11:09:28.213 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 11:03:19.126816616 +0000 2026-03-10T11:09:28.213 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 11:03:19.126816616 +0000 2026-03-10T11:09:28.213 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T11:09:28.213 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T11:09:28.231 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
11:09:27 vm00 bash[20758]: cluster 2026-03-10T11:09:26.398899+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:28.231 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:27 vm00 bash[20758]: cluster 2026-03-10T11:09:26.398899+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:28.237 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T11:09:28.237 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T11:09:28.237 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000151023 s, 3.4 MB/s 2026-03-10T11:09:28.238 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T11:09:28.282 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdc 2026-03-10T11:09:28.317 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:28 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:09:28.329 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdc 2026-03-10T11:09:28.329 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T11:09:28.329 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T11:09:28.329 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T11:09:28.329 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 11:03:20.122816616 +0000 2026-03-10T11:09:28.329 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 11:03:19.134816616 +0000 2026-03-10T11:09:28.329 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 11:03:19.134816616 +0000 2026-03-10T11:09:28.329 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T11:09:28.329 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T11:09:28.377 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T11:09:28.377 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T11:09:28.377 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000149109 s, 3.4 MB/s 2026-03-10T11:09:28.377 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T11:09:28.422 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdd 2026-03-10T11:09:28.469 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdd 2026-03-10T11:09:28.469 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T11:09:28.469 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T11:09:28.469 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T11:09:28.469 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 11:03:20.110816616 +0000 2026-03-10T11:09:28.469 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 11:03:19.130816616 +0000 2026-03-10T11:09:28.469 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 11:03:19.130816616 +0000 2026-03-10T11:09:28.469 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T11:09:28.469 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T11:09:28.516 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T11:09:28.516 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T11:09:28.516 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000141945 s, 3.6 MB/s 2026-03-10T11:09:28.517 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T11:09:28.562 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vde 2026-03-10T11:09:28.592 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:28 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:09:28.592 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:28 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:09:28.609 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vde 2026-03-10T11:09:28.609 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T11:09:28.609 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T11:09:28.609 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T11:09:28.609 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 11:03:20.118816616 +0000 2026-03-10T11:09:28.609 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 11:03:19.130816616 +0000 2026-03-10T11:09:28.609 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 11:03:19.130816616 +0000 2026-03-10T11:09:28.609 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T11:09:28.609 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T11:09:28.657 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T11:09:28.657 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T11:09:28.657 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000124983 s, 4.1 MB/s 2026-03-10T11:09:28.658 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T11:09:28.702 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T11:09:28.702 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T11:09:28.705 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T11:09:28.705 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d? 
2026-03-10T11:09:28.752 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda 2026-03-10T11:09:28.752 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb 2026-03-10T11:09:28.752 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc 2026-03-10T11:09:28.752 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd 2026-03-10T11:09:28.752 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde 2026-03-10T11:09:28.752 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T11:09:28.752 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T11:09:28.752 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb 2026-03-10T11:09:28.796 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb 2026-03-10T11:09:28.796 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T11:09:28.796 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T11:09:28.796 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T11:09:28.796 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 11:02:48.996432188 +0000 2026-03-10T11:09:28.796 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 11:02:47.940432188 +0000 2026-03-10T11:09:28.796 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 11:02:47.940432188 +0000 2026-03-10T11:09:28.796 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-10T11:09:28.796 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T11:09:28.852 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:28 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:09:28.852 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:28 vm03 systemd[1]: Started Ceph mgr.b for 507c5972-1c71-11f1-afff-ff6f68248060. 2026-03-10T11:09:28.853 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:28 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:09:28.861 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T11:09:28.861 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T11:09:28.861 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000173805 s, 2.9 MB/s 2026-03-10T11:09:28.862 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T11:09:28.914 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc 2026-03-10T11:09:28.972 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc 2026-03-10T11:09:28.972 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T11:09:28.972 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T11:09:28.972 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T11:09:28.972 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 11:02:49.004432188 +0000 2026-03-10T11:09:28.972 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 11:02:47.964432188 +0000 2026-03-10T11:09:28.972 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 11:02:47.964432188 +0000 2026-03-10T11:09:28.972 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-10T11:09:28.972 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T11:09:29.037 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T11:09:29.037 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T11:09:29.037 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.00503861 s, 102 kB/s 2026-03-10T11:09:29.038 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T11:09:29.096 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.009545+0000 mgr.a (mgr.14150) 48 : audit [DBG] from='client.24101 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: cephadm 2026-03-10T11:09:28.010386+0000 mgr.a (mgr.14150) 49 : cephadm [INF] Saving service mgr spec with placement vm00=a;vm03=b;count:2 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.014203+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.015230+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.016364+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.016768+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.020755+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.021935+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.023833+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.026232+0000 mon.a (mon.0) 200 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.026743+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: cephadm 2026-03-10T11:09:28.027331+0000 mgr.a (mgr.14150) 50 : cephadm [INF] Deploying daemon mgr.b on vm03 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.791838+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.796243+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.131 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.800534+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.804469+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.132 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[23405]: audit 2026-03-10T11:09:28.815263+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:29.132 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:28 vm03 bash[24120]: debug 2026-03-10T11:09:28.969+0000 7f2eb3cd9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:09:29.132 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[24120]: debug 2026-03-10T11:09:29.001+0000 7f2eb3cd9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:09:29.132 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[24120]: debug 2026-03-10T11:09:29.125+0000 7f2eb3cd9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T11:09:29.134 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd 
2026-03-10T11:09:29.134 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T11:09:29.134 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T11:09:29.134 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T11:09:29.134 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 11:02:48.992432188 +0000 2026-03-10T11:09:29.134 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 11:02:47.964432188 +0000 2026-03-10T11:09:29.134 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 11:02:47.964432188 +0000 2026-03-10T11:09:29.134 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-10T11:09:29.134 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T11:09:29.184 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T11:09:29.184 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T11:09:29.184 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000168816 s, 3.0 MB/s 2026-03-10T11:09:29.185 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T11:09:29.230 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde 2026-03-10T11:09:29.276 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde 2026-03-10T11:09:29.276 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T11:09:29.276 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T11:09:29.276 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T11:09:29.276 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-10 11:02:49.000432188 +0000 2026-03-10T11:09:29.276 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-10 11:02:47.964432188 +0000 2026-03-10T11:09:29.276 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-10 11:02:47.964432188 +0000 2026-03-10T11:09:29.276 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-10T11:09:29.276 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T11:09:29.324 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-10T11:09:29.324 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-10T11:09:29.324 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000158867 s, 3.2 MB/s 2026-03-10T11:09:29.325 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T11:09:29.370 INFO:tasks.cephadm:Deploying osd.0 on vm00 with /dev/vde... 
2026-03-10T11:09:29.370 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- lvm zap /dev/vde 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.009545+0000 mgr.a (mgr.14150) 48 : audit [DBG] from='client.24101 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=a;vm03=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: cephadm 2026-03-10T11:09:28.010386+0000 mgr.a (mgr.14150) 49 : cephadm [INF] Saving service mgr spec with placement vm00=a;vm03=b;count:2 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.014203+0000 mon.a (mon.0) 193 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.015230+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.016364+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.016768+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.020755+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.021935+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.023833+0000 mon.a (mon.0) 199 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.026232+0000 mon.a (mon.0) 200 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.026743+0000 mon.a (mon.0) 201 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: cephadm 2026-03-10T11:09:28.027331+0000 mgr.a (mgr.14150) 50 : cephadm [INF] Deploying daemon mgr.b on vm03 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.791838+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.796243+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.800534+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.804469+0000 mon.a (mon.0) 205 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:29.377 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:29 vm00 bash[20758]: audit 2026-03-10T11:09:28.815263+0000 mon.a (mon.0) 206 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:29.404 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[24120]: debug 2026-03-10T11:09:29.397+0000 7f2eb3cd9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:09:30.194 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[23405]: cluster 2026-03-10T11:09:28.399129+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:30.194 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[24120]: debug 2026-03-10T11:09:29.833+0000 7f2eb3cd9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:09:30.194 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:29 vm03 bash[24120]: debug 2026-03-10T11:09:29.917+0000 7f2eb3cd9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:09:30.194 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T11:09:30.194 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T11:09:30.194 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: from numpy import show_config as show_numpy_config 2026-03-10T11:09:30.194 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.049+0000 7f2eb3cd9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:09:30.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:30 vm00 bash[20758]: cluster 2026-03-10T11:09:28.399129+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:30.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:30 vm00 bash[20758]: cluster 2026-03-10T11:09:28.399129+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:30.567 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.189+0000 7f2eb3cd9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:09:30.567 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.225+0000 7f2eb3cd9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:09:30.567 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.261+0000 7f2eb3cd9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:09:30.567 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.301+0000 7f2eb3cd9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:09:30.567 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.353+0000 7f2eb3cd9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:09:31.050 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.785+0000 7f2eb3cd9140 -1 
mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:09:31.050 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.821+0000 7f2eb3cd9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:09:31.050 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:30 vm03 bash[24120]: debug 2026-03-10T11:09:30.857+0000 7f2eb3cd9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:09:31.050 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 2026-03-10T11:09:31.001+0000 7f2eb3cd9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:09:31.317 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 2026-03-10T11:09:31.045+0000 7f2eb3cd9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:09:31.317 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 2026-03-10T11:09:31.085+0000 7f2eb3cd9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:09:31.317 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 2026-03-10T11:09:31.205+0000 7f2eb3cd9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:09:31.661 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 2026-03-10T11:09:31.401+0000 7f2eb3cd9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:09:31.661 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 2026-03-10T11:09:31.613+0000 7f2eb3cd9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:09:31.661 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 2026-03-10T11:09:31.653+0000 7f2eb3cd9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:09:32.037 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 
2026-03-10T11:09:31.705+0000 7f2eb3cd9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:09:32.037 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:31 vm03 bash[24120]: debug 2026-03-10T11:09:31.889+0000 7f2eb3cd9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:09:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:32 vm03 bash[23405]: cluster 2026-03-10T11:09:30.399354+0000 mgr.a (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:32 vm03 bash[23405]: cluster 2026-03-10T11:09:30.399354+0000 mgr.a (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:32.317 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:09:32 vm03 bash[24120]: debug 2026-03-10T11:09:32.217+0000 7f2eb3cd9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:09:32.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:32 vm00 bash[20758]: cluster 2026-03-10T11:09:30.399354+0000 mgr.a (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:32.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:32 vm00 bash[20758]: cluster 2026-03-10T11:09:30.399354+0000 mgr.a (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.227041+0000 mon.b (mon.1) 2 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.227041+0000 mon.b (mon.1) 2 : audit [DBG] from='mgr.? 
192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.227823+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.227823+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: cluster 2026-03-10T11:09:32.228352+0000 mon.a (mon.0) 207 : cluster [DBG] Standby manager daemon b started 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: cluster 2026-03-10T11:09:32.228352+0000 mon.a (mon.0) 207 : cluster [DBG] Standby manager daemon b started 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.229399+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.229399+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.230081+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 
192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.230081+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.952032+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:33 vm03 bash[23405]: audit 2026-03-10T11:09:32.952032+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.227041+0000 mon.b (mon.1) 2 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.227041+0000 mon.b (mon.1) 2 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.227823+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.227823+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 
192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: cluster 2026-03-10T11:09:32.228352+0000 mon.a (mon.0) 207 : cluster [DBG] Standby manager daemon b started 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: cluster 2026-03-10T11:09:32.228352+0000 mon.a (mon.0) 207 : cluster [DBG] Standby manager daemon b started 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.229399+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.229399+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.230081+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.230081+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 
192.168.123.103:0/1353916496' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.952032+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:33.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:33 vm00 bash[20758]: audit 2026-03-10T11:09:32.952032+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:33.982 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: cluster 2026-03-10T11:09:32.399694+0000 mgr.a (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: cluster 2026-03-10T11:09:32.399694+0000 mgr.a (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: cluster 2026-03-10T11:09:33.058869+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: a(active, since 52s), standbys: b 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: cluster 2026-03-10T11:09:33.058869+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: a(active, since 52s), standbys: b 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.059507+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 
bash[20758]: audit 2026-03-10T11:09:33.059507+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.755903+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.755903+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.758998+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.758998+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.759552+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.759552+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.760500+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.760500+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.764118+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.764118+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.773418+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.773418+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.774165+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.774165+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr 
services"}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.774841+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:34.290 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:34 vm00 bash[20758]: audit 2026-03-10T11:09:33.774841+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: cluster 2026-03-10T11:09:32.399694+0000 mgr.a (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: cluster 2026-03-10T11:09:32.399694+0000 mgr.a (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: cluster 2026-03-10T11:09:33.058869+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: a(active, since 52s), standbys: b 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: cluster 2026-03-10T11:09:33.058869+0000 mon.a (mon.0) 209 : cluster [DBG] mgrmap e13: a(active, since 52s), standbys: b 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.059507+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.059507+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' 
entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.755903+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.755903+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.758998+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.758998+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.759552+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:34.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.759552+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.760500+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.760500+0000 mon.a 
(mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.764118+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.764118+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.773418+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.773418+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.774165+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.774165+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 
2026-03-10T11:09:33.774841+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:34.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:34 vm03 bash[23405]: audit 2026-03-10T11:09:33.774841+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:34.942 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:09:34.956 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph orch daemon add osd vm00:/dev/vde 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: cephadm 2026-03-10T11:09:33.773192+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: cephadm 2026-03-10T11:09:33.773192+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 
2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: cephadm 2026-03-10T11:09:33.775505+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Reconfiguring daemon mgr.a on vm00 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: cephadm 2026-03-10T11:09:33.775505+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Reconfiguring daemon mgr.a on vm00 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.204522+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.204522+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.208896+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.208896+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.210075+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.210075+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: 
audit 2026-03-10T11:09:34.211398+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.211398+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.212251+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.212251+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.216096+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:35 vm00 bash[20758]: audit 2026-03-10T11:09:34.216096+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: cephadm 2026-03-10T11:09:33.773192+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: cephadm 2026-03-10T11:09:33.773192+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 
2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: cephadm 2026-03-10T11:09:33.775505+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Reconfiguring daemon mgr.a on vm00 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: cephadm 2026-03-10T11:09:33.775505+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Reconfiguring daemon mgr.a on vm00 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.204522+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.204522+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.208896+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.208896+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.210075+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.210075+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:35.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: 
audit 2026-03-10T11:09:34.211398+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:35.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.211398+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:35.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.212251+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:35.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.212251+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:09:35.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.216096+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:35.318 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:35 vm03 bash[23405]: audit 2026-03-10T11:09:34.216096+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:36.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:36 vm03 bash[23405]: cluster 2026-03-10T11:09:34.399942+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:36.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:36 vm03 bash[23405]: cluster 2026-03-10T11:09:34.399942+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:36.482 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:36 vm00 bash[20758]: cluster 2026-03-10T11:09:34.399942+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:36.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:36 vm00 bash[20758]: cluster 2026-03-10T11:09:34.399942+0000 mgr.a (mgr.14150) 56 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:38.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:38 vm03 bash[23405]: cluster 2026-03-10T11:09:36.400208+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:38.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:38 vm03 bash[23405]: cluster 2026-03-10T11:09:36.400208+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:38 vm00 bash[20758]: cluster 2026-03-10T11:09:36.400208+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:38 vm00 bash[20758]: cluster 2026-03-10T11:09:36.400208+0000 mgr.a (mgr.14150) 57 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:39.618 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:09:40.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:40 vm03 bash[23405]: cluster 2026-03-10T11:09:38.400495+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:40.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:40 vm03 bash[23405]: cluster 2026-03-10T11:09:38.400495+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:40.317 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:40 vm03 bash[23405]: audit 2026-03-10T11:09:39.953149+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:09:40.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:40 vm03 bash[23405]: audit 2026-03-10T11:09:39.953149+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:09:40.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:40 vm03 bash[23405]: audit 2026-03-10T11:09:39.955023+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:09:40.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:40 vm03 bash[23405]: audit 2026-03-10T11:09:39.955023+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:09:40.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:40 vm03 bash[23405]: audit 2026-03-10T11:09:39.955868+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:40.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:40 vm03 bash[23405]: audit 2026-03-10T11:09:39.955868+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:40 vm00 bash[20758]: cluster 2026-03-10T11:09:38.400495+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-10T11:09:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:40 vm00 bash[20758]: cluster 2026-03-10T11:09:38.400495+0000 mgr.a (mgr.14150) 58 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:40 vm00 bash[20758]: audit 2026-03-10T11:09:39.953149+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:09:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:40 vm00 bash[20758]: audit 2026-03-10T11:09:39.953149+0000 mon.a (mon.0) 225 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:09:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:40 vm00 bash[20758]: audit 2026-03-10T11:09:39.955023+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:09:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:40 vm00 bash[20758]: audit 2026-03-10T11:09:39.955023+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:09:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:40 vm00 bash[20758]: audit 2026-03-10T11:09:39.955868+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:40 vm00 bash[20758]: audit 2026-03-10T11:09:39.955868+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T11:09:41.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:41 vm00 bash[20758]: audit 2026-03-10T11:09:39.951580+0000 mgr.a (mgr.14150) 59 : audit [DBG] from='client.14198 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:41.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:41 vm00 bash[20758]: audit 2026-03-10T11:09:39.951580+0000 mgr.a (mgr.14150) 59 : audit [DBG] from='client.14198 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:41.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:41 vm03 bash[23405]: audit 2026-03-10T11:09:39.951580+0000 mgr.a (mgr.14150) 59 : audit [DBG] from='client.14198 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:41.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:41 vm03 bash[23405]: audit 2026-03-10T11:09:39.951580+0000 mgr.a (mgr.14150) 59 : audit [DBG] from='client.14198 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:09:42.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:42 vm00 bash[20758]: cluster 2026-03-10T11:09:40.400731+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:42.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:42 vm00 bash[20758]: cluster 2026-03-10T11:09:40.400731+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:42.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:42 vm03 bash[23405]: cluster 2026-03-10T11:09:40.400731+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B 
avail 2026-03-10T11:09:42.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:42 vm03 bash[23405]: cluster 2026-03-10T11:09:40.400731+0000 mgr.a (mgr.14150) 60 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:44.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:44 vm00 bash[20758]: cluster 2026-03-10T11:09:42.401008+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:44.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:44 vm00 bash[20758]: cluster 2026-03-10T11:09:42.401008+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:44.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:44 vm03 bash[23405]: cluster 2026-03-10T11:09:42.401008+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:44.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:44 vm03 bash[23405]: cluster 2026-03-10T11:09:42.401008+0000 mgr.a (mgr.14150) 61 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: cluster 2026-03-10T11:09:44.401265+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: cluster 2026-03-10T11:09:44.401265+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: audit 2026-03-10T11:09:45.519677+0000 mon.a (mon.0) 228 : audit [INF] from='client.? 
192.168.123.100:0/2751912517' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcfd3f1d-8445-47f0-911e-aa2f6ea0dada"}]: dispatch 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: audit 2026-03-10T11:09:45.519677+0000 mon.a (mon.0) 228 : audit [INF] from='client.? 192.168.123.100:0/2751912517' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcfd3f1d-8445-47f0-911e-aa2f6ea0dada"}]: dispatch 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: audit 2026-03-10T11:09:45.523183+0000 mon.a (mon.0) 229 : audit [INF] from='client.? 192.168.123.100:0/2751912517' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fcfd3f1d-8445-47f0-911e-aa2f6ea0dada"}]': finished 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: audit 2026-03-10T11:09:45.523183+0000 mon.a (mon.0) 229 : audit [INF] from='client.? 192.168.123.100:0/2751912517' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fcfd3f1d-8445-47f0-911e-aa2f6ea0dada"}]': finished 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: cluster 2026-03-10T11:09:45.526404+0000 mon.a (mon.0) 230 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: cluster 2026-03-10T11:09:45.526404+0000 mon.a (mon.0) 230 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: audit 2026-03-10T11:09:45.526866+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:09:46.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:46 vm00 bash[20758]: audit 2026-03-10T11:09:45.526866+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: cluster 2026-03-10T11:09:44.401265+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: cluster 2026-03-10T11:09:44.401265+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: audit 2026-03-10T11:09:45.519677+0000 mon.a (mon.0) 228 : audit [INF] from='client.? 192.168.123.100:0/2751912517' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcfd3f1d-8445-47f0-911e-aa2f6ea0dada"}]: dispatch 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: audit 2026-03-10T11:09:45.519677+0000 mon.a (mon.0) 228 : audit [INF] from='client.? 192.168.123.100:0/2751912517' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fcfd3f1d-8445-47f0-911e-aa2f6ea0dada"}]: dispatch 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: audit 2026-03-10T11:09:45.523183+0000 mon.a (mon.0) 229 : audit [INF] from='client.? 192.168.123.100:0/2751912517' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fcfd3f1d-8445-47f0-911e-aa2f6ea0dada"}]': finished 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: audit 2026-03-10T11:09:45.523183+0000 mon.a (mon.0) 229 : audit [INF] from='client.? 
192.168.123.100:0/2751912517' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fcfd3f1d-8445-47f0-911e-aa2f6ea0dada"}]': finished 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: cluster 2026-03-10T11:09:45.526404+0000 mon.a (mon.0) 230 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: cluster 2026-03-10T11:09:45.526404+0000 mon.a (mon.0) 230 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: audit 2026-03-10T11:09:45.526866+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:09:46.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:46 vm03 bash[23405]: audit 2026-03-10T11:09:45.526866+0000 mon.a (mon.0) 231 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:09:47.481 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:47 vm00 bash[20758]: audit 2026-03-10T11:09:46.166731+0000 mon.a (mon.0) 232 : audit [DBG] from='client.? 192.168.123.100:0/3539216342' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:09:47.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:47 vm00 bash[20758]: audit 2026-03-10T11:09:46.166731+0000 mon.a (mon.0) 232 : audit [DBG] from='client.? 192.168.123.100:0/3539216342' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:09:47.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:47 vm03 bash[23405]: audit 2026-03-10T11:09:46.166731+0000 mon.a (mon.0) 232 : audit [DBG] from='client.? 
192.168.123.100:0/3539216342' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:09:47.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:47 vm03 bash[23405]: audit 2026-03-10T11:09:46.166731+0000 mon.a (mon.0) 232 : audit [DBG] from='client.? 192.168.123.100:0/3539216342' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:09:48.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:48 vm00 bash[20758]: cluster 2026-03-10T11:09:46.401534+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:48.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:48 vm00 bash[20758]: cluster 2026-03-10T11:09:46.401534+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:48.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:48 vm03 bash[23405]: cluster 2026-03-10T11:09:46.401534+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:48.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:48 vm03 bash[23405]: cluster 2026-03-10T11:09:46.401534+0000 mgr.a (mgr.14150) 63 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:49.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:49 vm00 bash[20758]: cluster 2026-03-10T11:09:48.401742+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:49.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:49 vm00 bash[20758]: cluster 2026-03-10T11:09:48.401742+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:49.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:49 vm03 bash[23405]: cluster 2026-03-10T11:09:48.401742+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-10T11:09:49.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:49 vm03 bash[23405]: cluster 2026-03-10T11:09:48.401742+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:51.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:51 vm00 bash[20758]: cluster 2026-03-10T11:09:50.401972+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:51.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:51 vm00 bash[20758]: cluster 2026-03-10T11:09:50.401972+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:51.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:51 vm03 bash[23405]: cluster 2026-03-10T11:09:50.401972+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:51.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:51 vm03 bash[23405]: cluster 2026-03-10T11:09:50.401972+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:53.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:53 vm00 bash[20758]: cluster 2026-03-10T11:09:52.402190+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:53.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:53 vm00 bash[20758]: cluster 2026-03-10T11:09:52.402190+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:53.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:53 vm03 bash[23405]: cluster 2026-03-10T11:09:52.402190+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:53.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:53 vm03 bash[23405]: cluster 2026-03-10T11:09:52.402190+0000 mgr.a (mgr.14150) 66 : 
cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:55.614 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 bash[20758]: cluster 2026-03-10T11:09:54.402443+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:55.614 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 bash[20758]: cluster 2026-03-10T11:09:54.402443+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:55.614 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 bash[20758]: audit 2026-03-10T11:09:55.026659+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:09:55.614 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 bash[20758]: audit 2026-03-10T11:09:55.026659+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:09:55.614 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 bash[20758]: audit 2026-03-10T11:09:55.027303+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:55.614 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 bash[20758]: audit 2026-03-10T11:09:55.027303+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:55.614 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 bash[20758]: cephadm 2026-03-10T11:09:55.027750+0000 mgr.a (mgr.14150) 68 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T11:09:55.614 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 bash[20758]: cephadm 
2026-03-10T11:09:55.027750+0000 mgr.a (mgr.14150) 68 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T11:09:55.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:55 vm03 bash[23405]: cluster 2026-03-10T11:09:54.402443+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:55.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:55 vm03 bash[23405]: cluster 2026-03-10T11:09:54.402443+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:55.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:55 vm03 bash[23405]: audit 2026-03-10T11:09:55.026659+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:09:55.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:55 vm03 bash[23405]: audit 2026-03-10T11:09:55.026659+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:09:55.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:55 vm03 bash[23405]: audit 2026-03-10T11:09:55.027303+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:55.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:55 vm03 bash[23405]: audit 2026-03-10T11:09:55.027303+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:09:55.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:55 vm03 bash[23405]: cephadm 2026-03-10T11:09:55.027750+0000 mgr.a (mgr.14150) 68 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T11:09:55.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:55 vm03 
bash[23405]: cephadm 2026-03-10T11:09:55.027750+0000 mgr.a (mgr.14150) 68 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T11:09:55.871 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:55 vm00 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:09:56.124 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:56 vm00 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:09:56.124 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:09:55 vm00 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:09:56.124 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 11:09:56 vm00 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:09:56.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:56 vm00 bash[20758]: audit 2026-03-10T11:09:56.147586+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:56.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:56 vm00 bash[20758]: audit 2026-03-10T11:09:56.147586+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:56.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:56 vm00 bash[20758]: audit 2026-03-10T11:09:56.152283+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:56.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:56 vm00 bash[20758]: audit 2026-03-10T11:09:56.152283+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:56.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:56 vm00 bash[20758]: audit 2026-03-10T11:09:56.156362+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:56.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:56 vm00 bash[20758]: audit 2026-03-10T11:09:56.156362+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:56.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:56 vm03 bash[23405]: audit 2026-03-10T11:09:56.147586+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:56.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:56 vm03 bash[23405]: audit 2026-03-10T11:09:56.147586+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:09:56.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:56 vm03 bash[23405]: audit 2026-03-10T11:09:56.152283+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:56.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:56 vm03 bash[23405]: audit 2026-03-10T11:09:56.152283+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:56.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:56 vm03 bash[23405]: audit 2026-03-10T11:09:56.156362+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:56.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:56 vm03 bash[23405]: audit 2026-03-10T11:09:56.156362+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:09:57.703 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:57 vm00 bash[20758]: cluster 2026-03-10T11:09:56.402678+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:57.703 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:57 vm00 bash[20758]: cluster 2026-03-10T11:09:56.402678+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:57 vm03 bash[23405]: cluster 2026-03-10T11:09:56.402678+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:57 vm03 bash[23405]: cluster 2026-03-10T11:09:56.402678+0000 mgr.a (mgr.14150) 69 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:59.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:59 vm00 bash[20758]: cluster 
2026-03-10T11:09:58.402889+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:59.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:09:59 vm00 bash[20758]: cluster 2026-03-10T11:09:58.402889+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:59.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:59 vm03 bash[23405]: cluster 2026-03-10T11:09:58.402889+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:09:59.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:09:59 vm03 bash[23405]: cluster 2026-03-10T11:09:58.402889+0000 mgr.a (mgr.14150) 70 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:00.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:00 vm03 bash[23405]: cluster 2026-03-10T11:10:00.000138+0000 mon.a (mon.0) 238 : cluster [INF] overall HEALTH_OK 2026-03-10T11:10:00.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:00 vm03 bash[23405]: cluster 2026-03-10T11:10:00.000138+0000 mon.a (mon.0) 238 : cluster [INF] overall HEALTH_OK 2026-03-10T11:10:00.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:00 vm03 bash[23405]: audit 2026-03-10T11:10:00.024092+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:10:00.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:00 vm03 bash[23405]: audit 2026-03-10T11:10:00.024092+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:10:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:00 vm00 bash[20758]: 
cluster 2026-03-10T11:10:00.000138+0000 mon.a (mon.0) 238 : cluster [INF] overall HEALTH_OK 2026-03-10T11:10:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:00 vm00 bash[20758]: cluster 2026-03-10T11:10:00.000138+0000 mon.a (mon.0) 238 : cluster [INF] overall HEALTH_OK 2026-03-10T11:10:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:00 vm00 bash[20758]: audit 2026-03-10T11:10:00.024092+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:10:00.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:00 vm00 bash[20758]: audit 2026-03-10T11:10:00.024092+0000 mon.a (mon.0) 239 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: cluster 2026-03-10T11:10:00.403140+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: cluster 2026-03-10T11:10:00.403140+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: audit 2026-03-10T11:10:00.488837+0000 mon.a (mon.0) 240 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: audit 2026-03-10T11:10:00.488837+0000 mon.a (mon.0) 240 : audit [INF] from='osd.0 
[v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: cluster 2026-03-10T11:10:00.492179+0000 mon.a (mon.0) 241 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: cluster 2026-03-10T11:10:00.492179+0000 mon.a (mon.0) 241 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: audit 2026-03-10T11:10:00.492493+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: audit 2026-03-10T11:10:00.492493+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: audit 2026-03-10T11:10:00.492630+0000 mon.a (mon.0) 243 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T11:10:01.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:01 vm03 bash[23405]: audit 2026-03-10T11:10:00.492630+0000 mon.a (mon.0) 243 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 
bash[20758]: cluster 2026-03-10T11:10:00.403140+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: cluster 2026-03-10T11:10:00.403140+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: audit 2026-03-10T11:10:00.488837+0000 mon.a (mon.0) 240 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: audit 2026-03-10T11:10:00.488837+0000 mon.a (mon.0) 240 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: cluster 2026-03-10T11:10:00.492179+0000 mon.a (mon.0) 241 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: cluster 2026-03-10T11:10:00.492179+0000 mon.a (mon.0) 241 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: audit 2026-03-10T11:10:00.492493+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: audit 2026-03-10T11:10:00.492493+0000 mon.a (mon.0) 242 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 
cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: audit 2026-03-10T11:10:00.492630+0000 mon.a (mon.0) 243 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T11:10:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:01 vm00 bash[20758]: audit 2026-03-10T11:10:00.492630+0000 mon.a (mon.0) 243 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:01.491103+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:01.491103+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: cluster 2026-03-10T11:10:01.493634+0000 mon.a (mon.0) 245 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: cluster 2026-03-10T11:10:01.493634+0000 mon.a (mon.0) 245 : cluster [DBG] osdmap e7: 1 total, 0 
up, 1 in 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:01.494214+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:01.494214+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:01.501718+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:01.501718+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:02.307928+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:02.307928+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:02.324766+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:02.734 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:02 vm00 bash[20758]: audit 2026-03-10T11:10:02.324766+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:01.491103+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:01.491103+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: cluster 2026-03-10T11:10:01.493634+0000 mon.a (mon.0) 245 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: cluster 2026-03-10T11:10:01.493634+0000 mon.a (mon.0) 245 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:01.494214+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:01.494214+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:01.501718+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' 
entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:01.501718+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:02.307928+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:02.307928+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:02.324766+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:02.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:02 vm03 bash[23405]: audit 2026-03-10T11:10:02.324766+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:03.528 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 0 on host 'vm00' 2026-03-10T11:10:03.631 DEBUG:teuthology.orchestra.run.vm00:osd.0> sudo journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@osd.0.service 2026-03-10T11:10:03.632 INFO:tasks.cephadm:Deploying osd.1 on vm03 with /dev/vde... 
2026-03-10T11:10:03.632 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- lvm zap /dev/vde 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: cluster 2026-03-10T11:10:01.007585+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: cluster 2026-03-10T11:10:01.007585+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: cluster 2026-03-10T11:10:01.007648+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: cluster 2026-03-10T11:10:01.007648+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: cluster 2026-03-10T11:10:02.403410+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: cluster 2026-03-10T11:10:02.403410+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.500044+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.500044+0000 mon.a (mon.0) 250 : audit [DBG] 
from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.765233+0000 mon.a (mon.0) 251 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.765233+0000 mon.a (mon.0) 251 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.772997+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.772997+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.773501+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.773501+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.777700+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:03.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:03 vm00 bash[20758]: audit 2026-03-10T11:10:02.777700+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: cluster 2026-03-10T11:10:01.007585+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: cluster 2026-03-10T11:10:01.007585+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: cluster 2026-03-10T11:10:01.007648+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: cluster 2026-03-10T11:10:01.007648+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: cluster 2026-03-10T11:10:02.403410+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: cluster 2026-03-10T11:10:02.403410+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.500044+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.500044+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' 
entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.765233+0000 mon.a (mon.0) 251 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.765233+0000 mon.a (mon.0) 251 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141]' entity='osd.0' 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.772997+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.772997+0000 mon.a (mon.0) 252 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.773501+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.773501+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.777700+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 
2026-03-10T11:10:03.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:03 vm03 bash[23405]: audit 2026-03-10T11:10:02.777700+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.501523+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.501523+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.517188+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.517188+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.520845+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.520845+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.523901+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 
192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.523901+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: cluster 2026-03-10T11:10:03.782255+0000 mon.a (mon.0) 259 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141] boot 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: cluster 2026-03-10T11:10:03.782255+0000 mon.a (mon.0) 259 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141] boot 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: cluster 2026-03-10T11:10:03.782364+0000 mon.a (mon.0) 260 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: cluster 2026-03-10T11:10:03.782364+0000 mon.a (mon.0) 260 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.782613+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:04.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:04 vm03 bash[23405]: audit 2026-03-10T11:10:03.782613+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.501523+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", 
"id": 0}]: dispatch 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.501523+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.517188+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.517188+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.520845+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.520845+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.523901+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.523901+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: cluster 2026-03-10T11:10:03.782255+0000 mon.a (mon.0) 259 : cluster [INF] osd.0 
[v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141] boot 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: cluster 2026-03-10T11:10:03.782255+0000 mon.a (mon.0) 259 : cluster [INF] osd.0 [v2:192.168.123.100:6802/2018763141,v1:192.168.123.100:6803/2018763141] boot 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: cluster 2026-03-10T11:10:03.782364+0000 mon.a (mon.0) 260 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: cluster 2026-03-10T11:10:03.782364+0000 mon.a (mon.0) 260 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.782613+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:04.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:04 vm00 bash[20758]: audit 2026-03-10T11:10:03.782613+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:10:06.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:05 vm00 bash[20758]: cluster 2026-03-10T11:10:04.403759+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:06.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:05 vm00 bash[20758]: cluster 2026-03-10T11:10:04.403759+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:06.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:05 vm00 bash[20758]: cluster 2026-03-10T11:10:04.974391+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T11:10:06.232 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:05 vm00 bash[20758]: cluster 2026-03-10T11:10:04.974391+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T11:10:06.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:05 vm03 bash[23405]: cluster 2026-03-10T11:10:04.403759+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:06.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:05 vm03 bash[23405]: cluster 2026-03-10T11:10:04.403759+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:10:06.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:05 vm03 bash[23405]: cluster 2026-03-10T11:10:04.974391+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T11:10:06.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:05 vm03 bash[23405]: cluster 2026-03-10T11:10:04.974391+0000 mon.a (mon.0) 262 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-10T11:10:08.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:07 vm00 bash[20758]: cluster 2026-03-10T11:10:06.404043+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:08.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:07 vm00 bash[20758]: cluster 2026-03-10T11:10:06.404043+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:08.256 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.b/config 2026-03-10T11:10:08.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:07 vm03 bash[23405]: cluster 2026-03-10T11:10:06.404043+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:08.270 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:07 vm03 
bash[23405]: cluster 2026-03-10T11:10:06.404043+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:09.497 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-10T11:10:09.512 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph orch daemon add osd vm03:/dev/vde 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: cluster 2026-03-10T11:10:08.404270+0000 mgr.a (mgr.14150) 75 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: cluster 2026-03-10T11:10:08.404270+0000 mgr.a (mgr.14150) 75 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.530733+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.530733+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.536552+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.536552+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.482 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.537421+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.537421+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.538607+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.538607+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.539084+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.539084+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.543186+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 
2026-03-10T11:10:10.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:10 vm00 bash[20758]: audit 2026-03-10T11:10:09.543186+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: cluster 2026-03-10T11:10:08.404270+0000 mgr.a (mgr.14150) 75 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: cluster 2026-03-10T11:10:08.404270+0000 mgr.a (mgr.14150) 75 : cluster [DBG] pgmap v43: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.530733+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.530733+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.536552+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.536552+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.537421+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.537421+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.538607+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.538607+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.539084+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.539084+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.543186+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:10.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:10 vm03 bash[23405]: audit 2026-03-10T11:10:09.543186+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:11.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:11 vm00 bash[20758]: cephadm 2026-03-10T11:10:09.518485+0000 mgr.a 
(mgr.14150) 76 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T11:10:11.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:11 vm00 bash[20758]: cephadm 2026-03-10T11:10:09.518485+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T11:10:11.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:11 vm00 bash[20758]: cephadm 2026-03-10T11:10:09.537809+0000 mgr.a (mgr.14150) 77 : cephadm [INF] Adjusting osd_memory_target on vm00 to 455.7M 2026-03-10T11:10:11.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:11 vm00 bash[20758]: cephadm 2026-03-10T11:10:09.537809+0000 mgr.a (mgr.14150) 77 : cephadm [INF] Adjusting osd_memory_target on vm00 to 455.7M 2026-03-10T11:10:11.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:11 vm00 bash[20758]: cephadm 2026-03-10T11:10:09.538251+0000 mgr.a (mgr.14150) 78 : cephadm [WRN] Unable to set osd_memory_target on vm00 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T11:10:11.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:11 vm00 bash[20758]: cephadm 2026-03-10T11:10:09.538251+0000 mgr.a (mgr.14150) 78 : cephadm [WRN] Unable to set osd_memory_target on vm00 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T11:10:11.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:11 vm03 bash[23405]: cephadm 2026-03-10T11:10:09.518485+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T11:10:11.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:11 vm03 bash[23405]: cephadm 2026-03-10T11:10:09.518485+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Detected new or changed devices on vm00 2026-03-10T11:10:11.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:11 vm03 bash[23405]: cephadm 2026-03-10T11:10:09.537809+0000 mgr.a (mgr.14150) 77 : cephadm [INF] Adjusting osd_memory_target on vm00 to 455.7M 2026-03-10T11:10:11.817 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:11 vm03 bash[23405]: cephadm 2026-03-10T11:10:09.537809+0000 mgr.a (mgr.14150) 77 : cephadm [INF] Adjusting osd_memory_target on vm00 to 455.7M 2026-03-10T11:10:11.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:11 vm03 bash[23405]: cephadm 2026-03-10T11:10:09.538251+0000 mgr.a (mgr.14150) 78 : cephadm [WRN] Unable to set osd_memory_target on vm00 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T11:10:11.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:11 vm03 bash[23405]: cephadm 2026-03-10T11:10:09.538251+0000 mgr.a (mgr.14150) 78 : cephadm [WRN] Unable to set osd_memory_target on vm00 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T11:10:12.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:12 vm03 bash[23405]: cluster 2026-03-10T11:10:10.404571+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:12.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:12 vm03 bash[23405]: cluster 2026-03-10T11:10:10.404571+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:12.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:12 vm00 bash[20758]: cluster 2026-03-10T11:10:10.404571+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:12.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:12 vm00 bash[20758]: cluster 2026-03-10T11:10:10.404571+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:13.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:13 vm03 bash[23405]: cluster 2026-03-10T11:10:12.404832+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-10T11:10:13.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:13 vm03 bash[23405]: cluster 2026-03-10T11:10:12.404832+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:13.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:13 vm00 bash[20758]: cluster 2026-03-10T11:10:12.404832+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:13.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:13 vm00 bash[20758]: cluster 2026-03-10T11:10:12.404832+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:14.133 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.b/config 2026-03-10T11:10:14.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:14 vm03 bash[23405]: audit 2026-03-10T11:10:14.407319+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:10:14.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:14 vm03 bash[23405]: audit 2026-03-10T11:10:14.407319+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:10:14.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:14 vm03 bash[23405]: audit 2026-03-10T11:10:14.408626+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:10:14.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:14 vm03 bash[23405]: audit 2026-03-10T11:10:14.408626+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' 
entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:10:14.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:14 vm03 bash[23405]: audit 2026-03-10T11:10:14.409058+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:14.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:14 vm03 bash[23405]: audit 2026-03-10T11:10:14.409058+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:14.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:14 vm00 bash[20758]: audit 2026-03-10T11:10:14.407319+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:10:14.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:14 vm00 bash[20758]: audit 2026-03-10T11:10:14.407319+0000 mon.a (mon.0) 269 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:10:14.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:14 vm00 bash[20758]: audit 2026-03-10T11:10:14.408626+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:10:14.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:14 vm00 bash[20758]: audit 2026-03-10T11:10:14.408626+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:10:14.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:14 vm00 bash[20758]: audit 
2026-03-10T11:10:14.409058+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:14.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:14 vm00 bash[20758]: audit 2026-03-10T11:10:14.409058+0000 mon.a (mon.0) 271 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:15.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:15 vm03 bash[23405]: cluster 2026-03-10T11:10:14.405079+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:15.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:15 vm03 bash[23405]: cluster 2026-03-10T11:10:14.405079+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:15.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:15 vm03 bash[23405]: audit 2026-03-10T11:10:14.405915+0000 mgr.a (mgr.14150) 82 : audit [DBG] from='client.24117 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:10:15.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:15 vm03 bash[23405]: audit 2026-03-10T11:10:14.405915+0000 mgr.a (mgr.14150) 82 : audit [DBG] from='client.24117 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:10:15.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:15 vm00 bash[20758]: cluster 2026-03-10T11:10:14.405079+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:15.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:15 vm00 bash[20758]: cluster 2026-03-10T11:10:14.405079+0000 mgr.a (mgr.14150) 81 : cluster [DBG] 
pgmap v46: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:15.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:15 vm00 bash[20758]: audit 2026-03-10T11:10:14.405915+0000 mgr.a (mgr.14150) 82 : audit [DBG] from='client.24117 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:10:15.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:15 vm00 bash[20758]: audit 2026-03-10T11:10:14.405915+0000 mgr.a (mgr.14150) 82 : audit [DBG] from='client.24117 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:10:18.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:17 vm00 bash[20758]: cluster 2026-03-10T11:10:16.405270+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:18.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:17 vm00 bash[20758]: cluster 2026-03-10T11:10:16.405270+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:17 vm03 bash[23405]: cluster 2026-03-10T11:10:16.405270+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:18.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:17 vm03 bash[23405]: cluster 2026-03-10T11:10:16.405270+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:20.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: cluster 2026-03-10T11:10:18.405510+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: 
cluster 2026-03-10T11:10:18.405510+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: audit 2026-03-10T11:10:20.057038+0000 mon.b (mon.1) 6 : audit [INF] from='client.? 192.168.123.103:0/1409233393' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]: dispatch 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: audit 2026-03-10T11:10:20.057038+0000 mon.b (mon.1) 6 : audit [INF] from='client.? 192.168.123.103:0/1409233393' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]: dispatch 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: audit 2026-03-10T11:10:20.057949+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]: dispatch 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: audit 2026-03-10T11:10:20.057949+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]: dispatch 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: audit 2026-03-10T11:10:20.060719+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]': finished 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: audit 2026-03-10T11:10:20.060719+0000 mon.a (mon.0) 273 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]': finished 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: cluster 2026-03-10T11:10:20.063932+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: cluster 2026-03-10T11:10:20.063932+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: audit 2026-03-10T11:10:20.064061+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:20.498 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:20 vm00 bash[20758]: audit 2026-03-10T11:10:20.064061+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: cluster 2026-03-10T11:10:18.405510+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: cluster 2026-03-10T11:10:18.405510+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: audit 2026-03-10T11:10:20.057038+0000 mon.b (mon.1) 6 : audit [INF] from='client.? 
192.168.123.103:0/1409233393' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]: dispatch 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: audit 2026-03-10T11:10:20.057038+0000 mon.b (mon.1) 6 : audit [INF] from='client.? 192.168.123.103:0/1409233393' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]: dispatch 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: audit 2026-03-10T11:10:20.057949+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]: dispatch 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: audit 2026-03-10T11:10:20.057949+0000 mon.a (mon.0) 272 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]: dispatch 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: audit 2026-03-10T11:10:20.060719+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]': finished 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: audit 2026-03-10T11:10:20.060719+0000 mon.a (mon.0) 273 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "df907036-2766-4eed-a794-6d9ac0e0f928"}]': finished 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: cluster 2026-03-10T11:10:20.063932+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: cluster 2026-03-10T11:10:20.063932+0000 mon.a (mon.0) 274 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: audit 2026-03-10T11:10:20.064061+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:20.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:20 vm03 bash[23405]: audit 2026-03-10T11:10:20.064061+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:21.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:21 vm00 bash[20758]: cluster 2026-03-10T11:10:20.405736+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:21.732 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:21 vm00 bash[20758]: cluster 2026-03-10T11:10:20.405736+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:21.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:21 vm03 bash[23405]: cluster 2026-03-10T11:10:20.405736+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:21.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:21 vm03 bash[23405]: cluster 2026-03-10T11:10:20.405736+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B 
data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:22.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:22 vm03 bash[23405]: audit 2026-03-10T11:10:21.707483+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.103:0/1604291723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:10:22.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:22 vm03 bash[23405]: audit 2026-03-10T11:10:21.707483+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.103:0/1604291723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:10:22.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:22 vm00 bash[20758]: audit 2026-03-10T11:10:21.707483+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.103:0/1604291723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:10:22.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:22 vm00 bash[20758]: audit 2026-03-10T11:10:21.707483+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 
192.168.123.103:0/1604291723' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:10:23.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:23 vm00 bash[20758]: cluster 2026-03-10T11:10:22.405980+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:23.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:23 vm00 bash[20758]: cluster 2026-03-10T11:10:22.405980+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:24.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:23 vm03 bash[23405]: cluster 2026-03-10T11:10:22.405980+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:24.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:23 vm03 bash[23405]: cluster 2026-03-10T11:10:22.405980+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:25.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:25 vm03 bash[23405]: cluster 2026-03-10T11:10:24.406229+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:25.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:25 vm03 bash[23405]: cluster 2026-03-10T11:10:24.406229+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:25.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:25 vm00 bash[20758]: cluster 2026-03-10T11:10:24.406229+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:25.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:25 vm00 bash[20758]: cluster 2026-03-10T11:10:24.406229+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 
20 GiB avail 2026-03-10T11:10:27.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:27 vm00 bash[20758]: cluster 2026-03-10T11:10:26.406453+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:27.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:27 vm00 bash[20758]: cluster 2026-03-10T11:10:26.406453+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:28.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:27 vm03 bash[23405]: cluster 2026-03-10T11:10:26.406453+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:28.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:27 vm03 bash[23405]: cluster 2026-03-10T11:10:26.406453+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:29.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:29 vm00 bash[20758]: cluster 2026-03-10T11:10:28.406693+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:29.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:29 vm00 bash[20758]: cluster 2026-03-10T11:10:28.406693+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:30.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:29 vm03 bash[23405]: cluster 2026-03-10T11:10:28.406693+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:30.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:29 vm03 bash[23405]: cluster 2026-03-10T11:10:28.406693+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:31.522 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
11:10:31 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:10:31.522 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:10:31 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:10:31.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:10:31.817 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:10:31 vm03 systemd[1]: /etc/systemd/system/ceph-507c5972-1c71-11f1-afff-ff6f68248060@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:10:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:31 vm00 bash[20758]: cluster 2026-03-10T11:10:30.406912+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:31 vm00 bash[20758]: cluster 2026-03-10T11:10:30.406912+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:31 vm00 bash[20758]: audit 2026-03-10T11:10:30.704684+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:10:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:31 vm00 bash[20758]: audit 2026-03-10T11:10:30.704684+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:10:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:31 vm00 bash[20758]: audit 2026-03-10T11:10:30.705196+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:31 vm00 bash[20758]: audit 2026-03-10T11:10:30.705196+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:31 vm00 bash[20758]: cephadm 2026-03-10T11:10:30.705574+0000 mgr.a (mgr.14150) 91 : cephadm [INF] Deploying daemon osd.1 on vm03 2026-03-10T11:10:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:31 vm00 bash[20758]: cephadm 2026-03-10T11:10:30.705574+0000 mgr.a (mgr.14150) 91 : cephadm [INF] 
Deploying daemon osd.1 on vm03 2026-03-10T11:10:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 bash[23405]: cluster 2026-03-10T11:10:30.406912+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 bash[23405]: cluster 2026-03-10T11:10:30.406912+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 bash[23405]: audit 2026-03-10T11:10:30.704684+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:10:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 bash[23405]: audit 2026-03-10T11:10:30.704684+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:10:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 bash[23405]: audit 2026-03-10T11:10:30.705196+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 bash[23405]: audit 2026-03-10T11:10:30.705196+0000 mon.a (mon.0) 277 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 bash[23405]: cephadm 2026-03-10T11:10:30.705574+0000 mgr.a (mgr.14150) 91 : cephadm [INF] Deploying daemon osd.1 on vm03 2026-03-10T11:10:32.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:31 vm03 bash[23405]: cephadm 2026-03-10T11:10:30.705574+0000 mgr.a 
(mgr.14150) 91 : cephadm [INF] Deploying daemon osd.1 on vm03 2026-03-10T11:10:33.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:32 vm00 bash[20758]: audit 2026-03-10T11:10:32.003693+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:33.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:32 vm00 bash[20758]: audit 2026-03-10T11:10:32.003693+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:33.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:32 vm00 bash[20758]: audit 2026-03-10T11:10:32.396231+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:33.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:32 vm00 bash[20758]: audit 2026-03-10T11:10:32.396231+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:33.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:32 vm00 bash[20758]: audit 2026-03-10T11:10:32.540704+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:33.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:32 vm00 bash[20758]: audit 2026-03-10T11:10:32.540704+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:32 vm03 bash[23405]: audit 2026-03-10T11:10:32.003693+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:32 vm03 bash[23405]: audit 2026-03-10T11:10:32.003693+0000 mon.a (mon.0) 278 : audit [DBG] 
from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:32 vm03 bash[23405]: audit 2026-03-10T11:10:32.396231+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:32 vm03 bash[23405]: audit 2026-03-10T11:10:32.396231+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:32 vm03 bash[23405]: audit 2026-03-10T11:10:32.540704+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:33.317 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:32 vm03 bash[23405]: audit 2026-03-10T11:10:32.540704+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:34.256 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:34 vm03 bash[23405]: cluster 2026-03-10T11:10:32.407115+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:34.256 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:34 vm03 bash[23405]: cluster 2026-03-10T11:10:32.407115+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:34.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:34 vm00 bash[20758]: cluster 2026-03-10T11:10:32.407115+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:34.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:34 vm00 bash[20758]: cluster 2026-03-10T11:10:32.407115+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-10T11:10:36.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:36 vm00 bash[20758]: cluster 2026-03-10T11:10:34.407422+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:36.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:36 vm00 bash[20758]: cluster 2026-03-10T11:10:34.407422+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:36.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:36 vm03 bash[23405]: cluster 2026-03-10T11:10:34.407422+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:36.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:36 vm03 bash[23405]: cluster 2026-03-10T11:10:34.407422+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:37.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:37 vm00 bash[20758]: audit 2026-03-10T11:10:36.098155+0000 mon.b (mon.1) 8 : audit [INF] from='osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:10:37.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:37 vm00 bash[20758]: audit 2026-03-10T11:10:36.098155+0000 mon.b (mon.1) 8 : audit [INF] from='osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:10:37.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:37 vm00 bash[20758]: audit 2026-03-10T11:10:36.098785+0000 mon.a (mon.0) 281 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:10:37.482 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:37 vm00 bash[20758]: audit 2026-03-10T11:10:36.098785+0000 mon.a (mon.0) 281 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:10:37.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:37 vm03 bash[23405]: audit 2026-03-10T11:10:36.098155+0000 mon.b (mon.1) 8 : audit [INF] from='osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:10:37.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:37 vm03 bash[23405]: audit 2026-03-10T11:10:36.098155+0000 mon.b (mon.1) 8 : audit [INF] from='osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:10:37.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:37 vm03 bash[23405]: audit 2026-03-10T11:10:36.098785+0000 mon.a (mon.0) 281 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:10:37.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:37 vm03 bash[23405]: audit 2026-03-10T11:10:36.098785+0000 mon.a (mon.0) 281 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:10:38.400 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: cluster 2026-03-10T11:10:36.408085+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: cluster 2026-03-10T11:10:36.408085+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB 
avail 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: audit 2026-03-10T11:10:37.087531+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: audit 2026-03-10T11:10:37.087531+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: cluster 2026-03-10T11:10:37.089213+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: cluster 2026-03-10T11:10:37.089213+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: audit 2026-03-10T11:10:37.089321+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: audit 2026-03-10T11:10:37.089321+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: audit 2026-03-10T11:10:37.090260+0000 mon.b (mon.1) 9 : audit [INF] from='osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 
11:10:38 vm03 bash[23405]: audit 2026-03-10T11:10:37.090260+0000 mon.b (mon.1) 9 : audit [INF] from='osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: audit 2026-03-10T11:10:37.090863+0000 mon.a (mon.0) 285 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T11:10:38.401 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:38 vm03 bash[23405]: audit 2026-03-10T11:10:37.090863+0000 mon.a (mon.0) 285 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: cluster 2026-03-10T11:10:36.408085+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: cluster 2026-03-10T11:10:36.408085+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: audit 2026-03-10T11:10:37.087531+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: audit 2026-03-10T11:10:37.087531+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 
2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: cluster 2026-03-10T11:10:37.089213+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: cluster 2026-03-10T11:10:37.089213+0000 mon.a (mon.0) 283 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: audit 2026-03-10T11:10:37.089321+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: audit 2026-03-10T11:10:37.089321+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: audit 2026-03-10T11:10:37.090260+0000 mon.b (mon.1) 9 : audit [INF] from='osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: audit 2026-03-10T11:10:37.090260+0000 mon.b (mon.1) 9 : audit [INF] from='osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: audit 2026-03-10T11:10:37.090863+0000 mon.a (mon.0) 285 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, 
"args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T11:10:38.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:38 vm00 bash[20758]: audit 2026-03-10T11:10:37.090863+0000 mon.a (mon.0) 285 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: cluster 2026-03-10T11:10:37.118624+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: cluster 2026-03-10T11:10:37.118624+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: cluster 2026-03-10T11:10:37.118690+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: cluster 2026-03-10T11:10:37.118690+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.090245+0000 mon.a (mon.0) 286 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.090245+0000 mon.a (mon.0) 286 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: cluster 2026-03-10T11:10:38.093619+0000 mon.a (mon.0) 287 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 
2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: cluster 2026-03-10T11:10:38.093619+0000 mon.a (mon.0) 287 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.094753+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.094753+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.095497+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.095497+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.591254+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.591254+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.595149+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 
2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.595149+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.596134+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.596134+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.596627+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.596627+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.601284+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:38.601284+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:39.095718+0000 mon.a (mon.0) 295 : audit 
[DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.364 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:39 vm03 bash[23405]: audit 2026-03-10T11:10:39.095718+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: cluster 2026-03-10T11:10:37.118624+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: cluster 2026-03-10T11:10:37.118624+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: cluster 2026-03-10T11:10:37.118690+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: cluster 2026-03-10T11:10:37.118690+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.090245+0000 mon.a (mon.0) 286 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.090245+0000 mon.a (mon.0) 286 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: cluster 2026-03-10T11:10:38.093619+0000 mon.a (mon.0) 287 : cluster [DBG] osdmap e12: 2 total, 1 
up, 2 in 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: cluster 2026-03-10T11:10:38.093619+0000 mon.a (mon.0) 287 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.094753+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.094753+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.095497+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.095497+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.591254+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.591254+0000 mon.a (mon.0) 290 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.595149+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' 
entity='mgr.a' 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.595149+0000 mon.a (mon.0) 291 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.596134+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.596134+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.596627+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.596627+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.601284+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:38.601284+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:39.095718+0000 mon.a 
(mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:39 vm00 bash[20758]: audit 2026-03-10T11:10:39.095718+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:39.535 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 1 on host 'vm03' 2026-03-10T11:10:39.599 DEBUG:teuthology.orchestra.run.vm03:osd.1> sudo journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@osd.1.service 2026-03-10T11:10:39.600 INFO:tasks.cephadm:Waiting for 2 OSDs to come up... 2026-03-10T11:10:39.600 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph osd stat -f json 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: cluster 2026-03-10T11:10:38.408409+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: cluster 2026-03-10T11:10:38.408409+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: cluster 2026-03-10T11:10:39.102825+0000 mon.a (mon.0) 296 : cluster [INF] osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837] boot 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: cluster 2026-03-10T11:10:39.102825+0000 mon.a (mon.0) 296 : cluster [INF] osd.1 
[v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837] boot 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: cluster 2026-03-10T11:10:39.102938+0000 mon.a (mon.0) 297 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: cluster 2026-03-10T11:10:39.102938+0000 mon.a (mon.0) 297 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: audit 2026-03-10T11:10:39.104035+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: audit 2026-03-10T11:10:39.104035+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: audit 2026-03-10T11:10:39.521495+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: audit 2026-03-10T11:10:39.521495+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: audit 2026-03-10T11:10:39.527383+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: audit 2026-03-10T11:10:39.527383+0000 mon.a (mon.0) 300 : 
audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: audit 2026-03-10T11:10:39.534106+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:40.482 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:40 vm00 bash[20758]: audit 2026-03-10T11:10:39.534106+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: cluster 2026-03-10T11:10:38.408409+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: cluster 2026-03-10T11:10:38.408409+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: cluster 2026-03-10T11:10:39.102825+0000 mon.a (mon.0) 296 : cluster [INF] osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837] boot 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: cluster 2026-03-10T11:10:39.102825+0000 mon.a (mon.0) 296 : cluster [INF] osd.1 [v2:192.168.123.103:6800/245154837,v1:192.168.123.103:6801/245154837] boot 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: cluster 2026-03-10T11:10:39.102938+0000 mon.a (mon.0) 297 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: cluster 2026-03-10T11:10:39.102938+0000 mon.a (mon.0) 297 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 
bash[23405]: audit 2026-03-10T11:10:39.104035+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: audit 2026-03-10T11:10:39.104035+0000 mon.a (mon.0) 298 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: audit 2026-03-10T11:10:39.521495+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: audit 2026-03-10T11:10:39.521495+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: audit 2026-03-10T11:10:39.527383+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: audit 2026-03-10T11:10:39.527383+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: audit 2026-03-10T11:10:39.534106+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:40.567 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:40 vm03 bash[23405]: audit 2026-03-10T11:10:39.534106+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:41.817 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:41 vm03 bash[23405]: cluster 2026-03-10T11:10:40.408620+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:41.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:41 vm03 bash[23405]: cluster 2026-03-10T11:10:40.408620+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:41.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:41 vm03 bash[23405]: cluster 2026-03-10T11:10:40.543577+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T11:10:41.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:41 vm03 bash[23405]: cluster 2026-03-10T11:10:40.543577+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T11:10:41.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:41 vm00 bash[20758]: cluster 2026-03-10T11:10:40.408620+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:41.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:41 vm00 bash[20758]: cluster 2026-03-10T11:10:40.408620+0000 mgr.a (mgr.14150) 96 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:41.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:41 vm00 bash[20758]: cluster 2026-03-10T11:10:40.543577+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T11:10:41.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:41 vm00 bash[20758]: cluster 2026-03-10T11:10:40.543577+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T11:10:43.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:43 vm03 bash[23405]: cluster 2026-03-10T11:10:42.408879+0000 mgr.a (mgr.14150) 97 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-10T11:10:43.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:43 vm03 bash[23405]: cluster 2026-03-10T11:10:42.408879+0000 mgr.a (mgr.14150) 97 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:43.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:43 vm00 bash[20758]: cluster 2026-03-10T11:10:42.408879+0000 mgr.a (mgr.14150) 97 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:43.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:43 vm00 bash[20758]: cluster 2026-03-10T11:10:42.408879+0000 mgr.a (mgr.14150) 97 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:44.219 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:10:44.475 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:10:44.535 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":14,"num_osds":2,"num_up_osds":2,"osd_up_since":1773141039,"num_in_osds":2,"osd_in_since":1773141020,"num_remapped_pgs":0} 2026-03-10T11:10:44.535 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph osd dump --format=json 2026-03-10T11:10:44.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:44 vm03 bash[23405]: audit 2026-03-10T11:10:44.475284+0000 mon.a (mon.0) 303 : audit [DBG] from='client.? 192.168.123.100:0/2173081564' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:10:44.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:44 vm03 bash[23405]: audit 2026-03-10T11:10:44.475284+0000 mon.a (mon.0) 303 : audit [DBG] from='client.? 
192.168.123.100:0/2173081564' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:10:44.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:44 vm00 bash[20758]: audit 2026-03-10T11:10:44.475284+0000 mon.a (mon.0) 303 : audit [DBG] from='client.? 192.168.123.100:0/2173081564' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:10:44.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:44 vm00 bash[20758]: audit 2026-03-10T11:10:44.475284+0000 mon.a (mon.0) 303 : audit [DBG] from='client.? 192.168.123.100:0/2173081564' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: cluster 2026-03-10T11:10:44.409137+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: cluster 2026-03-10T11:10:44.409137+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.189021+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.189021+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.193064+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.193064+0000 mon.a (mon.0) 
305 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.193915+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.193915+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.194974+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.194974+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.195390+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.195390+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.199533+0000 mon.a 
(mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:45 vm03 bash[23405]: audit 2026-03-10T11:10:45.199533+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: cluster 2026-03-10T11:10:44.409137+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: cluster 2026-03-10T11:10:44.409137+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.189021+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.189021+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.193064+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.193064+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.193915+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": 
"osd_memory_target"}]: dispatch 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.193915+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.194974+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.194974+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.195390+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.195390+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.199533+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:45.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:45 vm00 bash[20758]: audit 2026-03-10T11:10:45.199533+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.100:0/306309575' entity='mgr.a' 2026-03-10T11:10:46.817 
INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:46 vm03 bash[23405]: cephadm 2026-03-10T11:10:45.183851+0000 mgr.a (mgr.14150) 99 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T11:10:46.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:46 vm03 bash[23405]: cephadm 2026-03-10T11:10:45.183851+0000 mgr.a (mgr.14150) 99 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T11:10:46.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:46 vm03 bash[23405]: cephadm 2026-03-10T11:10:45.194260+0000 mgr.a (mgr.14150) 100 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-10T11:10:46.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:46 vm03 bash[23405]: cephadm 2026-03-10T11:10:45.194260+0000 mgr.a (mgr.14150) 100 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-10T11:10:46.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:46 vm03 bash[23405]: cephadm 2026-03-10T11:10:45.194649+0000 mgr.a (mgr.14150) 101 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T11:10:46.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:46 vm03 bash[23405]: cephadm 2026-03-10T11:10:45.194649+0000 mgr.a (mgr.14150) 101 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T11:10:46.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:46 vm00 bash[20758]: cephadm 2026-03-10T11:10:45.183851+0000 mgr.a (mgr.14150) 99 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T11:10:46.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:46 vm00 bash[20758]: cephadm 2026-03-10T11:10:45.183851+0000 mgr.a (mgr.14150) 99 : cephadm [INF] Detected new or changed devices on vm03 2026-03-10T11:10:46.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:46 vm00 bash[20758]: cephadm 
2026-03-10T11:10:45.194260+0000 mgr.a (mgr.14150) 100 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-10T11:10:46.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:46 vm00 bash[20758]: cephadm 2026-03-10T11:10:45.194260+0000 mgr.a (mgr.14150) 100 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-10T11:10:46.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:46 vm00 bash[20758]: cephadm 2026-03-10T11:10:45.194649+0000 mgr.a (mgr.14150) 101 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T11:10:46.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:46 vm00 bash[20758]: cephadm 2026-03-10T11:10:45.194649+0000 mgr.a (mgr.14150) 101 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T11:10:47.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:47 vm00 bash[20758]: cluster 2026-03-10T11:10:46.409357+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:47.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:47 vm00 bash[20758]: cluster 2026-03-10T11:10:46.409357+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:48.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:47 vm03 bash[23405]: cluster 2026-03-10T11:10:46.409357+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:48.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:47 vm03 bash[23405]: cluster 2026-03-10T11:10:46.409357+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:48.230 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config 
/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:10:48.486 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:10:48.486 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":14,"fsid":"507c5972-1c71-11f1-afff-ff6f68248060","created":"2026-03-10T11:08:18.119378+0000","modified":"2026-03-10T11:10:40.532603+0000","last_up_change":"2026-03-10T11:10:39.096755+0000","last_in_change":"2026-03-10T11:10:20.058227+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"fcfd3f1d-8445-47f0-911e-aa2f6ea0dada","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6803","nonce":2018763141}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6805","nonce":2018763141}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6809","nonce":2018763141}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6807","nonce":2018763141}]},"public_addr":"192.168.123.100:6803/2018763141","cluster_addr":"192.168.123.100:6805/2018763141","heartbeat_back_addr":"192.168.123.100:6809/2018763141","heartbeat_front_addr":"192.168.123.100:6807/2018763141","state":["exists","up"]},{"os
d":1,"uuid":"df907036-2766-4eed-a794-6d9ac0e0f928","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6801","nonce":245154837}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6803","nonce":245154837}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6807","nonce":245154837}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6805","nonce":245154837}]},"public_addr":"192.168.123.103:6801/245154837","cluster_addr":"192.168.123.103:6803/245154837","heartbeat_back_addr":"192.168.123.103:6807/245154837","heartbeat_front_addr":"192.168.123.103:6805/245154837","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:10:01.007650+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:10:37.118692+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6801/831425426":"2026-03-11T11:08:40.387448+0000","192.168.123.100:0/1758498356":"2026-03-11T11:08:40.387448+0000","192.168.123.100:0/741790616":"2026-03-11T11:08:40.387448+0000","192.168.123.100:6800/831425426":"2026-03-11T11:08:40.387448+0000","192.168.123.100:0/2736774605":"2026-03-11T11:08:29.276136+0000","192.168.123.100:0/2395073436":"2026-03-11T11:08:29.276136+0000","192.168.123.100:0/1541396627":"2026-03-11T11:08:40
.387448+0000","192.168.123.100:6801/3038231949":"2026-03-11T11:08:29.276136+0000","192.168.123.100:6800/3038231949":"2026-03-11T11:08:29.276136+0000","192.168.123.100:0/3310260038":"2026-03-11T11:08:29.276136+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T11:10:48.542 INFO:tasks.cephadm.ceph_manager.ceph:[] 2026-03-10T11:10:48.542 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T11:10:48.542 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T11:10:48.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:48 vm00 bash[20758]: audit 2026-03-10T11:10:48.486497+0000 mon.a (mon.0) 310 : audit [DBG] from='client.? 192.168.123.100:0/3926951113' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:10:48.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:48 vm00 bash[20758]: audit 2026-03-10T11:10:48.486497+0000 mon.a (mon.0) 310 : audit [DBG] from='client.? 192.168.123.100:0/3926951113' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:10:49.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:48 vm03 bash[23405]: audit 2026-03-10T11:10:48.486497+0000 mon.a (mon.0) 310 : audit [DBG] from='client.? 
192.168.123.100:0/3926951113' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:10:49.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:48 vm03 bash[23405]: audit 2026-03-10T11:10:48.486497+0000 mon.a (mon.0) 310 : audit [DBG] from='client.? 192.168.123.100:0/3926951113' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:10:49.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:49 vm00 bash[20758]: cluster 2026-03-10T11:10:48.409542+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:49.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:49 vm00 bash[20758]: cluster 2026-03-10T11:10:48.409542+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:50.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:49 vm03 bash[23405]: cluster 2026-03-10T11:10:48.409542+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:50.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:49 vm03 bash[23405]: cluster 2026-03-10T11:10:48.409542+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:51.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:51 vm00 bash[20758]: cluster 2026-03-10T11:10:50.409812+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:51.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:51 vm00 bash[20758]: cluster 2026-03-10T11:10:50.409812+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:52.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:51 vm03 bash[23405]: cluster 2026-03-10T11:10:50.409812+0000 mgr.a (mgr.14150) 104 
: cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:52.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:51 vm03 bash[23405]: cluster 2026-03-10T11:10:50.409812+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:52.243 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:10:52.537 INFO:teuthology.orchestra.run.vm00.stdout:[client.0] 2026-03-10T11:10:52.537 INFO:teuthology.orchestra.run.vm00.stdout: key = AQA8/K9poSzGHxAAG426ot0dCwFbrfjLpFFiVA== 2026-03-10T11:10:52.610 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T11:10:52.610 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T11:10:52.610 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T11:10:52.623 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T11:10:52.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:52 vm00 bash[20758]: audit 2026-03-10T11:10:52.532932+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/2962144386' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:52.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:52 vm00 bash[20758]: audit 2026-03-10T11:10:52.532932+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 
192.168.123.100:0/2962144386' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:52.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:52 vm00 bash[20758]: audit 2026-03-10T11:10:52.535599+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 192.168.123.100:0/2962144386' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T11:10:52.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:52 vm00 bash[20758]: audit 2026-03-10T11:10:52.535599+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 192.168.123.100:0/2962144386' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T11:10:53.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:52 vm03 bash[23405]: audit 2026-03-10T11:10:52.532932+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/2962144386' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:53.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:52 vm03 bash[23405]: audit 2026-03-10T11:10:52.532932+0000 mon.a (mon.0) 311 : audit [INF] from='client.? 192.168.123.100:0/2962144386' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:53.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:52 vm03 bash[23405]: audit 2026-03-10T11:10:52.535599+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 
192.168.123.100:0/2962144386' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T11:10:53.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:52 vm03 bash[23405]: audit 2026-03-10T11:10:52.535599+0000 mon.a (mon.0) 312 : audit [INF] from='client.? 192.168.123.100:0/2962144386' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T11:10:53.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:53 vm00 bash[20758]: cluster 2026-03-10T11:10:52.410015+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:53.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:53 vm00 bash[20758]: cluster 2026-03-10T11:10:52.410015+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:54.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:53 vm03 bash[23405]: cluster 2026-03-10T11:10:52.410015+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:54.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:53 vm03 bash[23405]: cluster 2026-03-10T11:10:52.410015+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:55.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:55 vm00 bash[20758]: cluster 2026-03-10T11:10:54.410264+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:55.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:55 vm00 bash[20758]: cluster 2026-03-10T11:10:54.410264+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B 
data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:56.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:55 vm03 bash[23405]: cluster 2026-03-10T11:10:54.410264+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:56.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:55 vm03 bash[23405]: cluster 2026-03-10T11:10:54.410264+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:57.240 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.b/config 2026-03-10T11:10:57.538 INFO:teuthology.orchestra.run.vm03.stdout:[client.1] 2026-03-10T11:10:57.538 INFO:teuthology.orchestra.run.vm03.stdout: key = AQBB/K9pF2XTHxAAsmhWeWcjavqR3UW5UAbpEA== 2026-03-10T11:10:57.589 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-10T11:10:57.589 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T11:10:57.589 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T11:10:57.600 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-10T11:10:57.600 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T11:10:57.600 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph mgr dump --format=json 2026-03-10T11:10:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:57 vm03 bash[23405]: cluster 2026-03-10T11:10:56.410459+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:57 vm03 bash[23405]: cluster 2026-03-10T11:10:56.410459+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:57 vm03 bash[23405]: audit 2026-03-10T11:10:57.533083+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.103:0/3094504009' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:57 vm03 bash[23405]: audit 2026-03-10T11:10:57.533083+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.103:0/3094504009' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:57 vm03 bash[23405]: audit 2026-03-10T11:10:57.533829+0000 mon.a (mon.0) 313 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:57 vm03 bash[23405]: audit 2026-03-10T11:10:57.533829+0000 mon.a (mon.0) 313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:57 vm03 bash[23405]: audit 2026-03-10T11:10:57.536545+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T11:10:57.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:57 vm03 bash[23405]: audit 2026-03-10T11:10:57.536545+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T11:10:57.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:57 vm00 bash[20758]: cluster 2026-03-10T11:10:56.410459+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:57.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:57 vm00 bash[20758]: cluster 2026-03-10T11:10:56.410459+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:57.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:57 vm00 bash[20758]: audit 2026-03-10T11:10:57.533083+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 
192.168.123.103:0/3094504009' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:57.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:57 vm00 bash[20758]: audit 2026-03-10T11:10:57.533083+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.103:0/3094504009' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:57.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:57 vm00 bash[20758]: audit 2026-03-10T11:10:57.533829+0000 mon.a (mon.0) 313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:57.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:57 vm00 bash[20758]: audit 2026-03-10T11:10:57.533829+0000 mon.a (mon.0) 313 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T11:10:57.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:57 vm00 bash[20758]: audit 2026-03-10T11:10:57.536545+0000 mon.a (mon.0) 314 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T11:10:57.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:57 vm00 bash[20758]: audit 2026-03-10T11:10:57.536545+0000 mon.a (mon.0) 314 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T11:10:59.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:59 vm00 bash[20758]: cluster 2026-03-10T11:10:58.410668+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:10:59.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:10:59 vm00 bash[20758]: cluster 2026-03-10T11:10:58.410668+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:00.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:59 vm03 bash[23405]: cluster 2026-03-10T11:10:58.410668+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:00.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:10:59 vm03 bash[23405]: cluster 2026-03-10T11:10:58.410668+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:01 vm00 bash[20758]: cluster 2026-03-10T11:11:00.410879+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:01.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:01 vm00 bash[20758]: cluster 2026-03-10T11:11:00.410879+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:02.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:01 vm03 bash[23405]: cluster 2026-03-10T11:11:00.410879+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:02.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:01 vm03 bash[23405]: cluster 2026-03-10T11:11:00.410879+0000 
mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:02.216 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:02.498 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:11:02.561 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":13,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":2479314195},{"type":"v1","addr":"192.168.123.100:6801","nonce":2479314195}]},"active_addr":"192.168.123.100:6801/2479314195","active_change":"2026-03-10T11:08:40.387558+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24105,"name":"b","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.100:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":2901177304}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.1
68.123.100:0","nonce":2887610697}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":716518078}]}]} 2026-03-10T11:11:02.563 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T11:11:02.563 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T11:11:02.563 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph osd dump --format=json 2026-03-10T11:11:02.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:02 vm00 bash[20758]: audit 2026-03-10T11:11:02.496592+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.100:0/4223852448' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T11:11:02.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:02 vm00 bash[20758]: audit 2026-03-10T11:11:02.496592+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.100:0/4223852448' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T11:11:03.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:02 vm03 bash[23405]: audit 2026-03-10T11:11:02.496592+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 192.168.123.100:0/4223852448' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T11:11:03.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:02 vm03 bash[23405]: audit 2026-03-10T11:11:02.496592+0000 mon.a (mon.0) 315 : audit [DBG] from='client.? 
192.168.123.100:0/4223852448' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T11:11:03.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:03 vm00 bash[20758]: cluster 2026-03-10T11:11:02.411096+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:03.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:03 vm00 bash[20758]: cluster 2026-03-10T11:11:02.411096+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:04.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:03 vm03 bash[23405]: cluster 2026-03-10T11:11:02.411096+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:04.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:03 vm03 bash[23405]: cluster 2026-03-10T11:11:02.411096+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:05 vm00 bash[20758]: cluster 2026-03-10T11:11:04.411312+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:05.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:05 vm00 bash[20758]: cluster 2026-03-10T11:11:04.411312+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:06.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:05 vm03 bash[23405]: cluster 2026-03-10T11:11:04.411312+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:06.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:05 vm03 bash[23405]: cluster 2026-03-10T11:11:04.411312+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 
MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:06.229 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:06.474 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:11:06.474 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":14,"fsid":"507c5972-1c71-11f1-afff-ff6f68248060","created":"2026-03-10T11:08:18.119378+0000","modified":"2026-03-10T11:10:40.532603+0000","last_up_change":"2026-03-10T11:10:39.096755+0000","last_in_change":"2026-03-10T11:10:20.058227+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"fcfd3f1d-8445-47f0-911e-aa2f6ea0dada","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6803","nonce":2018763141}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6805","nonce":2018763141}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6809","nonce":2018763141}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6807","nonce":2018763141}]},"public_addr":"192.168.123.100:6803/2018763141","cluster_addr":"192.168.123.100:6805/2018763141","heartbeat_back_addr":"192.
168.123.100:6809/2018763141","heartbeat_front_addr":"192.168.123.100:6807/2018763141","state":["exists","up"]},{"osd":1,"uuid":"df907036-2766-4eed-a794-6d9ac0e0f928","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6801","nonce":245154837}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6803","nonce":245154837}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6807","nonce":245154837}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6805","nonce":245154837}]},"public_addr":"192.168.123.103:6801/245154837","cluster_addr":"192.168.123.103:6803/245154837","heartbeat_back_addr":"192.168.123.103:6807/245154837","heartbeat_front_addr":"192.168.123.103:6805/245154837","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:10:01.007650+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:10:37.118692+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6801/831425426":"2026-03-11T11:08:40.387448+0000","192.168.123.100:0/1758498356":"2026-03-11T11:08:40.387448+0000","192.168.123.100:0/741790616":"2026-03-11T11:08:40.387448+0000","192.168.123.100:6800/831425426":"2026-03-11T11:08:40.387448+0000","192.168.123.100:0/2736774605":"2026-03-11T11:08:29.276136+0000","
192.168.123.100:0/2395073436":"2026-03-11T11:08:29.276136+0000","192.168.123.100:0/1541396627":"2026-03-11T11:08:40.387448+0000","192.168.123.100:6801/3038231949":"2026-03-11T11:08:29.276136+0000","192.168.123.100:6800/3038231949":"2026-03-11T11:08:29.276136+0000","192.168.123.100:0/3310260038":"2026-03-11T11:08:29.276136+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T11:11:06.534 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T11:11:06.534 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph osd dump --format=json 2026-03-10T11:11:06.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:06 vm00 bash[20758]: audit 2026-03-10T11:11:06.474702+0000 mon.a (mon.0) 316 : audit [DBG] from='client.? 192.168.123.100:0/2472478285' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:11:06.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:06 vm00 bash[20758]: audit 2026-03-10T11:11:06.474702+0000 mon.a (mon.0) 316 : audit [DBG] from='client.? 192.168.123.100:0/2472478285' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:11:07.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:06 vm03 bash[23405]: audit 2026-03-10T11:11:06.474702+0000 mon.a (mon.0) 316 : audit [DBG] from='client.? 
192.168.123.100:0/2472478285' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:11:07.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:06 vm03 bash[23405]: audit 2026-03-10T11:11:06.474702+0000 mon.a (mon.0) 316 : audit [DBG] from='client.? 192.168.123.100:0/2472478285' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:11:07.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:07 vm00 bash[20758]: cluster 2026-03-10T11:11:06.411522+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:07.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:07 vm00 bash[20758]: cluster 2026-03-10T11:11:06.411522+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:08.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:07 vm03 bash[23405]: cluster 2026-03-10T11:11:06.411522+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:08.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:07 vm03 bash[23405]: cluster 2026-03-10T11:11:06.411522+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:09.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:09 vm00 bash[20758]: cluster 2026-03-10T11:11:08.411791+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:09.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:09 vm00 bash[20758]: cluster 2026-03-10T11:11:08.411791+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:10.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:09 vm03 bash[23405]: cluster 2026-03-10T11:11:08.411791+0000 mgr.a (mgr.14150) 113 
: cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:10.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:09 vm03 bash[23405]: cluster 2026-03-10T11:11:08.411791+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:10.241 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:10.517 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:11:10.517 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":14,"fsid":"507c5972-1c71-11f1-afff-ff6f68248060","created":"2026-03-10T11:08:18.119378+0000","modified":"2026-03-10T11:10:40.532603+0000","last_up_change":"2026-03-10T11:10:39.096755+0000","last_in_change":"2026-03-10T11:10:20.058227+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"fcfd3f1d-8445-47f0-911e-aa2f6ea0dada","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6803","nonce":2018763141}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6805","nonce":2018763141}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6809","nonce":2018763141}]}
,"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":2018763141},{"type":"v1","addr":"192.168.123.100:6807","nonce":2018763141}]},"public_addr":"192.168.123.100:6803/2018763141","cluster_addr":"192.168.123.100:6805/2018763141","heartbeat_back_addr":"192.168.123.100:6809/2018763141","heartbeat_front_addr":"192.168.123.100:6807/2018763141","state":["exists","up"]},{"osd":1,"uuid":"df907036-2766-4eed-a794-6d9ac0e0f928","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6801","nonce":245154837}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6803","nonce":245154837}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6807","nonce":245154837}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":245154837},{"type":"v1","addr":"192.168.123.103:6805","nonce":245154837}]},"public_addr":"192.168.123.103:6801/245154837","cluster_addr":"192.168.123.103:6803/245154837","heartbeat_back_addr":"192.168.123.103:6807/245154837","heartbeat_front_addr":"192.168.123.103:6805/245154837","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:10:01.007650+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:10:37.118692+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:6801/831425426":"2026-0
3-11T11:08:40.387448+0000","192.168.123.100:0/1758498356":"2026-03-11T11:08:40.387448+0000","192.168.123.100:0/741790616":"2026-03-11T11:08:40.387448+0000","192.168.123.100:6800/831425426":"2026-03-11T11:08:40.387448+0000","192.168.123.100:0/2736774605":"2026-03-11T11:08:29.276136+0000","192.168.123.100:0/2395073436":"2026-03-11T11:08:29.276136+0000","192.168.123.100:0/1541396627":"2026-03-11T11:08:40.387448+0000","192.168.123.100:6801/3038231949":"2026-03-11T11:08:29.276136+0000","192.168.123.100:6800/3038231949":"2026-03-11T11:08:29.276136+0000","192.168.123.100:0/3310260038":"2026-03-11T11:08:29.276136+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T11:11:10.571 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph tell osd.0 flush_pg_stats 2026-03-10T11:11:10.571 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph tell osd.1 flush_pg_stats 2026-03-10T11:11:10.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:10 vm00 bash[20758]: audit 2026-03-10T11:11:10.517848+0000 mon.a (mon.0) 317 : audit [DBG] from='client.? 
192.168.123.100:0/704246948' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:11:10.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:10 vm00 bash[20758]: audit 2026-03-10T11:11:10.517848+0000 mon.a (mon.0) 317 : audit [DBG] from='client.? 192.168.123.100:0/704246948' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:11:11.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:10 vm03 bash[23405]: audit 2026-03-10T11:11:10.517848+0000 mon.a (mon.0) 317 : audit [DBG] from='client.? 192.168.123.100:0/704246948' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:11:11.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:10 vm03 bash[23405]: audit 2026-03-10T11:11:10.517848+0000 mon.a (mon.0) 317 : audit [DBG] from='client.? 192.168.123.100:0/704246948' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:11:11.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:11 vm00 bash[20758]: cluster 2026-03-10T11:11:10.412078+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:11.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:11 vm00 bash[20758]: cluster 2026-03-10T11:11:10.412078+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:12.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:11 vm03 bash[23405]: cluster 2026-03-10T11:11:10.412078+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:12.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:11 vm03 bash[23405]: cluster 2026-03-10T11:11:10.412078+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:13.982 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:13 vm00 bash[20758]: cluster 2026-03-10T11:11:12.412362+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:13.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:13 vm00 bash[20758]: cluster 2026-03-10T11:11:12.412362+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:14.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:13 vm03 bash[23405]: cluster 2026-03-10T11:11:12.412362+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:14.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:13 vm03 bash[23405]: cluster 2026-03-10T11:11:12.412362+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:14.254 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:14.254 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:14.469 INFO:teuthology.orchestra.run.vm00.stdout:34359738383 2026-03-10T11:11:14.469 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph osd last-stat-seq osd.0 2026-03-10T11:11:14.600 INFO:teuthology.orchestra.run.vm00.stdout:55834574857 2026-03-10T11:11:14.601 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph osd last-stat-seq osd.1 2026-03-10T11:11:15.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:15 vm00 bash[20758]: cluster 
2026-03-10T11:11:14.412794+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:15.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:15 vm00 bash[20758]: cluster 2026-03-10T11:11:14.412794+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:16.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:15 vm03 bash[23405]: cluster 2026-03-10T11:11:14.412794+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:16.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:15 vm03 bash[23405]: cluster 2026-03-10T11:11:14.412794+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:17.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:17 vm00 bash[20758]: cluster 2026-03-10T11:11:16.413017+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:17.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:17 vm00 bash[20758]: cluster 2026-03-10T11:11:16.413017+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:18.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:17 vm03 bash[23405]: cluster 2026-03-10T11:11:16.413017+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:18.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:17 vm03 bash[23405]: cluster 2026-03-10T11:11:16.413017+0000 mgr.a (mgr.14150) 117 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:18.265 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:18.526 
INFO:teuthology.orchestra.run.vm00.stdout:34359738384 2026-03-10T11:11:18.584 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738383 got 34359738384 for osd.0 2026-03-10T11:11:18.584 DEBUG:teuthology.parallel:result is None 2026-03-10T11:11:18.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:18 vm00 bash[20758]: audit 2026-03-10T11:11:18.525463+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.100:0/592283089' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T11:11:18.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:18 vm00 bash[20758]: audit 2026-03-10T11:11:18.525463+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.100:0/592283089' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T11:11:19.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:18 vm03 bash[23405]: audit 2026-03-10T11:11:18.525463+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.100:0/592283089' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T11:11:19.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:18 vm03 bash[23405]: audit 2026-03-10T11:11:18.525463+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 
192.168.123.100:0/592283089' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T11:11:19.268 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:19.529 INFO:teuthology.orchestra.run.vm00.stdout:55834574858 2026-03-10T11:11:19.684 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574857 got 55834574858 for osd.1 2026-03-10T11:11:19.684 DEBUG:teuthology.parallel:result is None 2026-03-10T11:11:19.684 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T11:11:19.684 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph pg dump --format=json 2026-03-10T11:11:19.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:19 vm00 bash[20758]: cluster 2026-03-10T11:11:18.413268+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:19.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:19 vm00 bash[20758]: cluster 2026-03-10T11:11:18.413268+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:19.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:19 vm00 bash[20758]: audit 2026-03-10T11:11:19.529048+0000 mon.b (mon.1) 12 : audit [DBG] from='client.? 192.168.123.100:0/1898375824' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T11:11:19.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:19 vm00 bash[20758]: audit 2026-03-10T11:11:19.529048+0000 mon.b (mon.1) 12 : audit [DBG] from='client.? 
192.168.123.100:0/1898375824' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T11:11:20.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:19 vm03 bash[23405]: cluster 2026-03-10T11:11:18.413268+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:20.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:19 vm03 bash[23405]: cluster 2026-03-10T11:11:18.413268+0000 mgr.a (mgr.14150) 118 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:20.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:19 vm03 bash[23405]: audit 2026-03-10T11:11:19.529048+0000 mon.b (mon.1) 12 : audit [DBG] from='client.? 192.168.123.100:0/1898375824' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T11:11:20.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:19 vm03 bash[23405]: audit 2026-03-10T11:11:19.529048+0000 mon.b (mon.1) 12 : audit [DBG] from='client.? 
192.168.123.100:0/1898375824' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T11:11:21.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:21 vm00 bash[20758]: cluster 2026-03-10T11:11:20.413486+0000 mgr.a (mgr.14150) 119 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:21.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:21 vm00 bash[20758]: cluster 2026-03-10T11:11:20.413486+0000 mgr.a (mgr.14150) 119 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:22.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:21 vm03 bash[23405]: cluster 2026-03-10T11:11:20.413486+0000 mgr.a (mgr.14150) 119 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:22.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:21 vm03 bash[23405]: cluster 2026-03-10T11:11:20.413486+0000 mgr.a (mgr.14150) 119 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:24.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:23 vm03 bash[23405]: cluster 2026-03-10T11:11:22.413740+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:24.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:23 vm03 bash[23405]: cluster 2026-03-10T11:11:22.413740+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:24.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:23 vm00 bash[20758]: cluster 2026-03-10T11:11:22.413740+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:24.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:23 vm00 bash[20758]: cluster 2026-03-10T11:11:22.413740+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 
MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:24.305 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:24.579 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T11:11:24.579 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:11:24.631 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":86,"stamp":"2026-03-10T11:11:24.413915+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":2,"num_per_pool_osds":2,"num_per_pool_omap_osds":0,"kb":41934848,"kb_used":53928,"kb_used_data":240,"kb_used_omap":3,"kb_used_meta":53628,"kb_avail":41880920,"statfs":{"total":42941284352,"available":42886062080,"internally_reserved":0,"allo
cated":245760,"data_stored":60148,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":3180,"internal_metadata":54915988},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":1,"up_from":13,"seq":55834574859,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26960,"kb_used_data":120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940464,"statfs":{"total":21470642176,"available":21443035136,"internally_reserved":0,"al
located":122880,"data_stored":30074,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738385,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26968,"kb_used_data":120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940456,"statfs":{"total":21470642176,"available":21443026944,"internally_reserved":0,"allocated":122880,"data_stored":30074,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-10T11:11:24.632 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph pg dump --format=json 2026-03-10T11:11:26.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:25 vm03 bash[23405]: cluster 2026-03-10T11:11:24.414034+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:26.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:25 vm03 bash[23405]: cluster 2026-03-10T11:11:24.414034+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:26.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:25 vm03 bash[23405]: audit 
2026-03-10T11:11:24.579652+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14272 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:26.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:25 vm03 bash[23405]: audit 2026-03-10T11:11:24.579652+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14272 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:26.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:25 vm00 bash[20758]: cluster 2026-03-10T11:11:24.414034+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:26.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:25 vm00 bash[20758]: cluster 2026-03-10T11:11:24.414034+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:26.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:25 vm00 bash[20758]: audit 2026-03-10T11:11:24.579652+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14272 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:26.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:25 vm00 bash[20758]: audit 2026-03-10T11:11:24.579652+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14272 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:28.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:27 vm03 bash[23405]: cluster 2026-03-10T11:11:26.414267+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:28.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:27 vm03 bash[23405]: cluster 2026-03-10T11:11:26.414267+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v87: 0 
pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:28.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:27 vm00 bash[20758]: cluster 2026-03-10T11:11:26.414267+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:28.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:27 vm00 bash[20758]: cluster 2026-03-10T11:11:26.414267+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v87: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:28.318 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:28.563 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:11:28.563 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T11:11:28.614 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":88,"stamp":"2026-03-10T11:11:28.414427+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stor
ed":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":0,"num_osds":2,"num_per_pool_osds":2,"num_per_pool_omap_osds":0,"kb":41934848,"kb_used":53928,"kb_used_data":240,"kb_used_omap":3,"kb_used_meta":53628,"kb_avail":41880920,"statfs":{"total":42941284352,"available":42886062080,"internally_reserved":0,"allocated":245760,"data_stored":60148,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":3180,"internal_metadata":54915988},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compres
sed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":1,"up_from":13,"seq":55834574860,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26960,"kb_used_data":120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940464,"statfs":{"total":21470642176,"available":21443035136,"internally_reserved":0,"allocated":122880,"data_stored":30074,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738386,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26968,"kb_used_data":120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940456,"statfs":{"total":21470642176,"available":21443026944,"internally_reserved":0,"allocated":122880,"data_stored":30074,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-10T11:11:28.615 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T11:11:28.615 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T11:11:28.615 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T11:11:28.615 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph health --format=json 2026-03-10T11:11:30.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:29 vm03 bash[23405]: cluster 2026-03-10T11:11:28.414527+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:30.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:29 vm03 bash[23405]: cluster 2026-03-10T11:11:28.414527+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:30.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:29 vm03 bash[23405]: audit 2026-03-10T11:11:28.563713+0000 mgr.a (mgr.14150) 125 : audit [DBG] from='client.14276 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:30.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:29 vm03 bash[23405]: audit 2026-03-10T11:11:28.563713+0000 mgr.a (mgr.14150) 125 : audit [DBG] from='client.14276 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:30.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:29 vm00 bash[20758]: cluster 2026-03-10T11:11:28.414527+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:30.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:29 vm00 bash[20758]: cluster 2026-03-10T11:11:28.414527+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:30.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:29 vm00 bash[20758]: audit 
2026-03-10T11:11:28.563713+0000 mgr.a (mgr.14150) 125 : audit [DBG] from='client.14276 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:30.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:29 vm00 bash[20758]: audit 2026-03-10T11:11:28.563713+0000 mgr.a (mgr.14150) 125 : audit [DBG] from='client.14276 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:32.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:31 vm03 bash[23405]: cluster 2026-03-10T11:11:30.414818+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:32.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:31 vm03 bash[23405]: cluster 2026-03-10T11:11:30.414818+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:31 vm00 bash[20758]: cluster 2026-03-10T11:11:30.414818+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:32.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:31 vm00 bash[20758]: cluster 2026-03-10T11:11:30.414818+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:32.331 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:32.650 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T11:11:32.650 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T11:11:32.704 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T11:11:32.704 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T11:11:32.704 INFO:teuthology.run_tasks:Running task 
cephadm.shell... 2026-03-10T11:11:32.706 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local 2026-03-10T11:11:32.706 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- bash -c 'set -ex 2026-03-10T11:11:32.706 DEBUG:teuthology.orchestra.run.vm00:> HOSTNAMES=$(ceph orch host ls --format json | jq -r '"'"'.[] | .hostname'"'"') 2026-03-10T11:11:32.706 DEBUG:teuthology.orchestra.run.vm00:> for host in $HOSTNAMES; do 2026-03-10T11:11:32.706 DEBUG:teuthology.orchestra.run.vm00:> # do a check-host on each host to make sure it'"'"'s reachable 2026-03-10T11:11:32.707 DEBUG:teuthology.orchestra.run.vm00:> ceph cephadm check-host ${host} 2> ${host}-ok.txt 2026-03-10T11:11:32.707 DEBUG:teuthology.orchestra.run.vm00:> HOST_OK=$(cat ${host}-ok.txt) 2026-03-10T11:11:32.707 DEBUG:teuthology.orchestra.run.vm00:> if ! grep -q "Host looks OK" <<< "$HOST_OK"; then 2026-03-10T11:11:32.707 DEBUG:teuthology.orchestra.run.vm00:> printf "Failed host check:\n\n$HOST_OK" 2026-03-10T11:11:32.707 DEBUG:teuthology.orchestra.run.vm00:> exit 1 2026-03-10T11:11:32.707 DEBUG:teuthology.orchestra.run.vm00:> fi 2026-03-10T11:11:32.707 DEBUG:teuthology.orchestra.run.vm00:> done 2026-03-10T11:11:32.707 DEBUG:teuthology.orchestra.run.vm00:> ' 2026-03-10T11:11:32.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:32 vm00 bash[20758]: audit 2026-03-10T11:11:32.650910+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.100:0/3107709485' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T11:11:32.982 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:32 vm00 bash[20758]: audit 2026-03-10T11:11:32.650910+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 
192.168.123.100:0/3107709485' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T11:11:33.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:32 vm03 bash[23405]: audit 2026-03-10T11:11:32.650910+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.100:0/3107709485' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T11:11:33.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:32 vm03 bash[23405]: audit 2026-03-10T11:11:32.650910+0000 mon.a (mon.0) 318 : audit [DBG] from='client.? 192.168.123.100:0/3107709485' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T11:11:34.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:33 vm03 bash[23405]: cluster 2026-03-10T11:11:32.415066+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:34.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:33 vm03 bash[23405]: cluster 2026-03-10T11:11:32.415066+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:34.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:33 vm00 bash[20758]: cluster 2026-03-10T11:11:32.415066+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:34.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:33 vm00 bash[20758]: cluster 2026-03-10T11:11:32.415066+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v90: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:36.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:35 vm03 bash[23405]: cluster 2026-03-10T11:11:34.415323+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:36.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:35 vm03 bash[23405]: cluster 
2026-03-10T11:11:34.415323+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:36.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:35 vm00 bash[20758]: cluster 2026-03-10T11:11:34.415323+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:36.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:35 vm00 bash[20758]: cluster 2026-03-10T11:11:34.415323+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v91: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:36.342 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:36.435 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r '.[] | .hostname' 2026-03-10T11:11:36.435 INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch host ls --format json 2026-03-10T11:11:36.587 INFO:teuthology.orchestra.run.vm00.stderr:+ HOSTNAMES='vm00 2026-03-10T11:11:36.587 INFO:teuthology.orchestra.run.vm00.stderr:vm03' 2026-03-10T11:11:36.587 INFO:teuthology.orchestra.run.vm00.stderr:+ for host in $HOSTNAMES 2026-03-10T11:11:36.587 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph cephadm check-host vm00 2026-03-10T11:11:37.020 INFO:teuthology.orchestra.run.vm00.stdout:vm00 (None) ok 2026-03-10T11:11:37.029 INFO:teuthology.orchestra.run.vm00.stderr:++ cat vm00-ok.txt 2026-03-10T11:11:37.030 INFO:teuthology.orchestra.run.vm00.stderr:+ HOST_OK='docker (/usr/bin/docker) is present 2026-03-10T11:11:37.031 INFO:teuthology.orchestra.run.vm00.stderr:systemctl is present 2026-03-10T11:11:37.031 INFO:teuthology.orchestra.run.vm00.stderr:lvcreate is present 2026-03-10T11:11:37.031 INFO:teuthology.orchestra.run.vm00.stderr:Unit ntp.service is enabled and running 2026-03-10T11:11:37.031 INFO:teuthology.orchestra.run.vm00.stderr:Hostname "vm00" matches what is expected. 
2026-03-10T11:11:37.031 INFO:teuthology.orchestra.run.vm00.stderr:Host looks OK' 2026-03-10T11:11:37.031 INFO:teuthology.orchestra.run.vm00.stderr:+ grep -q 'Host looks OK' 2026-03-10T11:11:37.031 INFO:teuthology.orchestra.run.vm00.stderr:+ for host in $HOSTNAMES 2026-03-10T11:11:37.031 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph cephadm check-host vm03 2026-03-10T11:11:37.492 INFO:teuthology.orchestra.run.vm00.stdout:vm03 (None) ok 2026-03-10T11:11:37.501 INFO:teuthology.orchestra.run.vm00.stderr:++ cat vm03-ok.txt 2026-03-10T11:11:37.502 INFO:teuthology.orchestra.run.vm00.stderr:+ HOST_OK='docker (/usr/bin/docker) is present 2026-03-10T11:11:37.502 INFO:teuthology.orchestra.run.vm00.stderr:systemctl is present 2026-03-10T11:11:37.502 INFO:teuthology.orchestra.run.vm00.stderr:lvcreate is present 2026-03-10T11:11:37.502 INFO:teuthology.orchestra.run.vm00.stderr:Unit ntp.service is enabled and running 2026-03-10T11:11:37.502 INFO:teuthology.orchestra.run.vm00.stderr:Hostname "vm03" matches what is expected. 
2026-03-10T11:11:37.502 INFO:teuthology.orchestra.run.vm00.stderr:Host looks OK' 2026-03-10T11:11:37.503 INFO:teuthology.orchestra.run.vm00.stderr:+ grep -q 'Host looks OK' 2026-03-10T11:11:37.548 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T11:11:37.550 INFO:tasks.cephadm:Teardown begin 2026-03-10T11:11:37.550 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T11:11:37.558 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T11:11:37.565 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T11:11:37.565 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 507c5972-1c71-11f1-afff-ff6f68248060 -- ceph mgr module disable cephadm 2026-03-10T11:11:37.775 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:37 vm00 bash[20758]: cluster 2026-03-10T11:11:36.415515+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v92: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:37.775 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:37 vm00 bash[20758]: cluster 2026-03-10T11:11:36.415515+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v92: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:37.775 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:37 vm00 bash[20758]: audit 2026-03-10T11:11:36.577544+0000 mgr.a (mgr.14150) 130 : audit [DBG] from='client.14284 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:37.775 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:37 vm00 bash[20758]: audit 2026-03-10T11:11:36.577544+0000 mgr.a (mgr.14150) 130 : audit [DBG] from='client.14284 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": 
["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:37.775 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:37 vm00 bash[20758]: audit 2026-03-10T11:11:36.736956+0000 mgr.a (mgr.14150) 131 : audit [DBG] from='client.14288 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm00", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:11:37.775 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:37 vm00 bash[20758]: audit 2026-03-10T11:11:36.736956+0000 mgr.a (mgr.14150) 131 : audit [DBG] from='client.14288 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm00", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:11:37.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:37 vm03 bash[23405]: cluster 2026-03-10T11:11:36.415515+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v92: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:37.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:37 vm03 bash[23405]: cluster 2026-03-10T11:11:36.415515+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v92: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:37.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:37 vm03 bash[23405]: audit 2026-03-10T11:11:36.577544+0000 mgr.a (mgr.14150) 130 : audit [DBG] from='client.14284 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:37.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:37 vm03 bash[23405]: audit 2026-03-10T11:11:36.577544+0000 mgr.a (mgr.14150) 130 : audit [DBG] from='client.14284 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:11:37.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:37 vm03 bash[23405]: audit 2026-03-10T11:11:36.736956+0000 mgr.a (mgr.14150) 131 : audit [DBG] from='client.14288 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", 
"host": "vm00", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:11:37.817 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:37 vm03 bash[23405]: audit 2026-03-10T11:11:36.736956+0000 mgr.a (mgr.14150) 131 : audit [DBG] from='client.14288 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm00", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:11:39.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:38 vm03 bash[23405]: audit 2026-03-10T11:11:37.186826+0000 mgr.a (mgr.14150) 132 : audit [DBG] from='client.14292 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:11:39.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:38 vm03 bash[23405]: audit 2026-03-10T11:11:37.186826+0000 mgr.a (mgr.14150) 132 : audit [DBG] from='client.14292 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:11:39.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:38 vm00 bash[20758]: audit 2026-03-10T11:11:37.186826+0000 mgr.a (mgr.14150) 132 : audit [DBG] from='client.14292 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:11:39.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:38 vm00 bash[20758]: audit 2026-03-10T11:11:37.186826+0000 mgr.a (mgr.14150) 132 : audit [DBG] from='client.14292 -' entity='client.admin' cmd=[{"prefix": "cephadm check-host", "host": "vm03", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:11:40.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:39 vm03 bash[23405]: cluster 2026-03-10T11:11:38.415780+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v93: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:40.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:39 vm03 bash[23405]: cluster 2026-03-10T11:11:38.415780+0000 mgr.a (mgr.14150) 133 : cluster 
[DBG] pgmap v93: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:40.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:39 vm00 bash[20758]: cluster 2026-03-10T11:11:38.415780+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v93: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:40.232 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:39 vm00 bash[20758]: cluster 2026-03-10T11:11:38.415780+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v93: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:42.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:41 vm03 bash[23405]: cluster 2026-03-10T11:11:40.415969+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v94: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:42.067 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:41 vm03 bash[23405]: cluster 2026-03-10T11:11:40.415969+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v94: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:42.211 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/mon.a/config 2026-03-10T11:11:42.224 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:41 vm00 bash[20758]: cluster 2026-03-10T11:11:40.415969+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v94: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:42.224 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:41 vm00 bash[20758]: cluster 2026-03-10T11:11:40.415969+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v94: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:11:42.371 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T11:11:42.370+0000 7fe8d0b9e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T11:11:42.371 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T11:11:42.370+0000 
7fe8d0b9e640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T11:11:42.371 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T11:11:42.370+0000 7fe8d0b9e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T11:11:42.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T11:11:42.370+0000 7fe8d0b9e640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T11:11:42.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T11:11:42.370+0000 7fe8d0b9e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T11:11:42.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T11:11:42.370+0000 7fe8d0b9e640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T11:11:42.372 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T11:11:42.370+0000 7fe8d0b9e640 -1 monclient: keyring not found 2026-03-10T11:11:42.372 INFO:teuthology.orchestra.run.vm00.stderr:[errno 21] error connecting to the cluster 2026-03-10T11:11:42.425 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T11:11:42.425 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T11:11:42.425 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T11:11:42.428 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T11:11:42.431 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T11:11:42.431 INFO:tasks.cephadm.mon.a:Stopping mon.a... 
2026-03-10T11:11:42.431 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.a 2026-03-10T11:11:42.678 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:42 vm00 systemd[1]: Stopping Ceph mon.a for 507c5972-1c71-11f1-afff-ff6f68248060... 2026-03-10T11:11:42.678 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:42 vm00 bash[20758]: debug 2026-03-10T11:11:42.534+0000 7ffb49928640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T11:11:42.678 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:42 vm00 bash[20758]: debug 2026-03-10T11:11:42.534+0000 7ffb49928640 -1 mon.a@0(leader) e2 *** Got Signal Terminated *** 2026-03-10T11:11:42.678 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 11:11:42 vm00 bash[35727]: ceph-507c5972-1c71-11f1-afff-ff6f68248060-mon-a 2026-03-10T11:11:42.734 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.a.service' 2026-03-10T11:11:42.750 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T11:11:42.750 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T11:11:42.750 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-10T11:11:42.750 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.b 2026-03-10T11:11:43.001 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.b.service' 2026-03-10T11:11:43.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:42 vm03 systemd[1]: Stopping Ceph mon.b for 507c5972-1c71-11f1-afff-ff6f68248060... 
2026-03-10T11:11:43.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:42 vm03 bash[23405]: debug 2026-03-10T11:11:42.797+0000 7f77667f3640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T11:11:43.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:42 vm03 bash[23405]: debug 2026-03-10T11:11:42.797+0000 7f77667f3640 -1 mon.b@1(peon) e2 *** Got Signal Terminated *** 2026-03-10T11:11:43.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:42 vm03 bash[30648]: ceph-507c5972-1c71-11f1-afff-ff6f68248060-mon-b 2026-03-10T11:11:43.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:43 vm03 systemd[1]: ceph-507c5972-1c71-11f1-afff-ff6f68248060@mon.b.service: Deactivated successfully. 2026-03-10T11:11:43.009 INFO:journalctl@ceph.mon.b.vm03.stdout:Mar 10 11:11:43 vm03 systemd[1]: Stopped Ceph mon.b for 507c5972-1c71-11f1-afff-ff6f68248060. 2026-03-10T11:11:43.015 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T11:11:43.015 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-10T11:11:43.015 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-10T11:11:43.015 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.a 2026-03-10T11:11:43.183 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.a.service' 2026-03-10T11:11:43.194 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T11:11:43.194 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-10T11:11:43.194 INFO:tasks.cephadm.mgr.b:Stopping mgr.b... 
2026-03-10T11:11:43.194 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.b 2026-03-10T11:11:43.271 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:11:43 vm03 systemd[1]: Stopping Ceph mgr.b for 507c5972-1c71-11f1-afff-ff6f68248060... 2026-03-10T11:11:43.335 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.b.service' 2026-03-10T11:11:43.341 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:11:43 vm03 bash[30743]: ceph-507c5972-1c71-11f1-afff-ff6f68248060-mgr-b 2026-03-10T11:11:43.341 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:11:43 vm03 systemd[1]: ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.b.service: Main process exited, code=exited, status=143/n/a 2026-03-10T11:11:43.341 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:11:43 vm03 systemd[1]: ceph-507c5972-1c71-11f1-afff-ff6f68248060@mgr.b.service: Failed with result 'exit-code'. 2026-03-10T11:11:43.341 INFO:journalctl@ceph.mgr.b.vm03.stdout:Mar 10 11:11:43 vm03 systemd[1]: Stopped Ceph mgr.b for 507c5972-1c71-11f1-afff-ff6f68248060. 2026-03-10T11:11:43.345 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T11:11:43.345 INFO:tasks.cephadm.mgr.b:Stopped mgr.b 2026-03-10T11:11:43.345 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T11:11:43.345 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-507c5972-1c71-11f1-afff-ff6f68248060@osd.0 2026-03-10T11:11:43.732 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 11:11:43 vm00 systemd[1]: Stopping Ceph osd.0 for 507c5972-1c71-11f1-afff-ff6f68248060... 
2026-03-10T11:11:43.732 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 11:11:43 vm00 bash[30502]: debug 2026-03-10T11:11:43.398+0000 7f34a07a2640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T11:11:43.732 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 11:11:43 vm00 bash[30502]: debug 2026-03-10T11:11:43.398+0000 7f34a07a2640 -1 osd.0 14 *** Got signal Terminated *** 2026-03-10T11:11:43.732 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 11:11:43 vm00 bash[30502]: debug 2026-03-10T11:11:43.398+0000 7f34a07a2640 -1 osd.0 14 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T11:11:48.732 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 11:11:48 vm00 bash[35913]: ceph-507c5972-1c71-11f1-afff-ff6f68248060-osd-0 2026-03-10T11:11:48.767 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@osd.0.service' 2026-03-10T11:11:48.800 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T11:11:48.800 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T11:11:48.800 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T11:11:48.800 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-507c5972-1c71-11f1-afff-ff6f68248060@osd.1 2026-03-10T11:11:49.067 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 11:11:48 vm03 systemd[1]: Stopping Ceph osd.1 for 507c5972-1c71-11f1-afff-ff6f68248060... 
2026-03-10T11:11:49.067 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 11:11:48 vm03 bash[26633]: debug 2026-03-10T11:11:48.841+0000 7f2b8fd95640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T11:11:49.067 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 11:11:48 vm03 bash[26633]: debug 2026-03-10T11:11:48.841+0000 7f2b8fd95640 -1 osd.1 14 *** Got signal Terminated *** 2026-03-10T11:11:49.067 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 11:11:48 vm03 bash[26633]: debug 2026-03-10T11:11:48.841+0000 7f2b8fd95640 -1 osd.1 14 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T11:11:54.207 INFO:journalctl@ceph.osd.1.vm03.stdout:Mar 10 11:11:53 vm03 bash[30830]: ceph-507c5972-1c71-11f1-afff-ff6f68248060-osd-1 2026-03-10T11:11:54.258 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-507c5972-1c71-11f1-afff-ff6f68248060@osd.1.service' 2026-03-10T11:11:54.274 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T11:11:54.274 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T11:11:54.274 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 507c5972-1c71-11f1-afff-ff6f68248060 --force --keep-logs 2026-03-10T11:11:54.370 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:11:56.426 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 507c5972-1c71-11f1-afff-ff6f68248060 --force --keep-logs 2026-03-10T11:11:56.517 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:11:58.476 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T11:11:58.483 
INFO:teuthology.orchestra.run.vm00.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-10T11:11:58.484 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T11:11:58.484 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T11:11:58.491 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T11:11:58.491 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009/remote/vm00/crash 2026-03-10T11:11:58.492 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/crash -- . 2026-03-10T11:11:58.533 INFO:teuthology.orchestra.run.vm00.stderr:tar: /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/crash: Cannot open: No such file or directory 2026-03-10T11:11:58.533 INFO:teuthology.orchestra.run.vm00.stderr:tar: Error is not recoverable: exiting now 2026-03-10T11:11:58.534 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009/remote/vm03/crash 2026-03-10T11:11:58.534 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/crash -- . 2026-03-10T11:11:58.542 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/507c5972-1c71-11f1-afff-ff6f68248060/crash: Cannot open: No such file or directory 2026-03-10T11:11:58.542 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now 2026-03-10T11:11:58.543 INFO:tasks.cephadm:Checking cluster log for badness... 
2026-03-10T11:11:58.543 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | head -n 1 2026-03-10T11:11:58.582 INFO:tasks.cephadm:Compressing logs... 2026-03-10T11:11:58.582 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T11:11:58.623 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T11:11:58.628 INFO:teuthology.orchestra.run.vm00.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T11:11:58.628 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T11:11:58.629 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mgr.a.log 2026-03-10T11:11:58.629 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.log 2026-03-10T11:11:58.630 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mgr.a.log: 89.4% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T11:11:58.630 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mon.a.log 2026-03-10T11:11:58.631 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.log: 85.2% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.log.gz 2026-03-10T11:11:58.631 INFO:teuthology.orchestra.run.vm03.stderr:find: gzip -5 --verbose -- 
/var/log/ceph/cephadm.log 2026-03-10T11:11:58.631 INFO:teuthology.orchestra.run.vm03.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T11:11:58.631 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.audit.log 2026-03-10T11:11:58.632 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.log 2026-03-10T11:11:58.632 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mon.b.log 2026-03-10T11:11:58.632 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.log: 85.1% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.log.gz 2026-03-10T11:11:58.632 INFO:teuthology.orchestra.run.vm03.stderr: 88.3% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T11:11:58.633 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-osd.1.log 2026-03-10T11:11:58.633 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mon.b.log: gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mgr.b.log 2026-03-10T11:11:58.639 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-volume.log 2026-03-10T11:11:58.639 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.audit.log: 89.1% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.audit.log.gz 2026-03-10T11:11:58.641 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-osd.1.log: 94.2% -- replaced with 
/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-osd.1.log.gz 2026-03-10T11:11:58.641 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.audit.log 2026-03-10T11:11:58.643 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.cephadm.log 2026-03-10T11:11:58.643 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mgr.b.log: 90.4% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mgr.b.log.gz 2026-03-10T11:11:58.643 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-volume.log 2026-03-10T11:11:58.644 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.audit.log: 89.2% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.audit.log.gz 2026-03-10T11:11:58.644 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.cephadm.log 2026-03-10T11:11:58.647 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-volume.log: 95.8% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-volume.log.gz 2026-03-10T11:11:58.647 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.cephadm.log: 74.8% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.cephadm.log.gz 2026-03-10T11:11:58.651 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-volume.log: 95.8% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-volume.log.gz 2026-03-10T11:11:58.651 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-osd.0.log 2026-03-10T11:11:58.651 
INFO:teuthology.orchestra.run.vm03.stderr: 92.7% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mon.b.log.gz 2026-03-10T11:11:58.651 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.cephadm.log: 78.6% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph.cephadm.log.gz 2026-03-10T11:11:58.652 INFO:teuthology.orchestra.run.vm03.stderr: 2026-03-10T11:11:58.652 INFO:teuthology.orchestra.run.vm03.stderr:real 0m0.027s 2026-03-10T11:11:58.652 INFO:teuthology.orchestra.run.vm03.stderr:user 0m0.037s 2026-03-10T11:11:58.652 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.009s 2026-03-10T11:11:58.653 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-osd.0.log: 89.7% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mgr.a.log.gz 2026-03-10T11:11:58.661 INFO:teuthology.orchestra.run.vm00.stderr: 94.0% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-osd.0.log.gz 2026-03-10T11:11:58.698 INFO:teuthology.orchestra.run.vm00.stderr: 91.1% -- replaced with /var/log/ceph/507c5972-1c71-11f1-afff-ff6f68248060/ceph-mon.a.log.gz 2026-03-10T11:11:58.699 INFO:teuthology.orchestra.run.vm00.stderr: 2026-03-10T11:11:58.699 INFO:teuthology.orchestra.run.vm00.stderr:real 0m0.075s 2026-03-10T11:11:58.699 INFO:teuthology.orchestra.run.vm00.stderr:user 0m0.106s 2026-03-10T11:11:58.699 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m0.003s 2026-03-10T11:11:58.699 INFO:tasks.cephadm:Archiving logs... 2026-03-10T11:11:58.699 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009/remote/vm00/log 2026-03-10T11:11:58.699 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/log/ceph -- . 
2026-03-10T11:11:58.756 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009/remote/vm03/log 2026-03-10T11:11:58.756 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T11:11:58.766 INFO:tasks.cephadm:Removing cluster... 2026-03-10T11:11:58.766 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 507c5972-1c71-11f1-afff-ff6f68248060 --force 2026-03-10T11:11:58.883 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: 507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:12:00.159 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 507c5972-1c71-11f1-afff-ff6f68248060 --force 2026-03-10T11:12:00.251 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: 507c5972-1c71-11f1-afff-ff6f68248060 2026-03-10T11:12:01.491 INFO:tasks.cephadm:Removing cephadm ... 2026-03-10T11:12:01.491 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T11:12:01.494 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T11:12:01.497 INFO:tasks.cephadm:Teardown complete 2026-03-10T11:12:01.497 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-10T11:12:01.500 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 
2026-03-10T11:12:01.500 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T11:12:01.539 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T11:12:01.555 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-10T11:12:01.555 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-10T11:12:01.559 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-10T11:12:01.559 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-10T11:12:01.622 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:01.625 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-10T11:12:01.816 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:01.817 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:01.825 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:01.826 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:01.986 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:01.986 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T11:12:01.987 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T11:12:01.987 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:01.997 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T11:12:01.997 INFO:teuthology.orchestra.run.vm03.stdout: ceph* 2026-03-10T11:12:02.029 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:02.029 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T11:12:02.030 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T11:12:02.030 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:02.041 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T11:12:02.042 INFO:teuthology.orchestra.run.vm00.stdout: ceph* 2026-03-10T11:12:02.166 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T11:12:02.166 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 47.1 kB disk space will be freed. 
2026-03-10T11:12:02.198 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-10T11:12:02.200 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:02.211 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T11:12:02.211 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-10T11:12:02.244 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-10T11:12:02.245 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:03.346 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T11:12:03.353 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:03.381 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:03.391 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:03.566 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:03.567 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:03.613 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:03.614 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:03.772 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:03.772 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T11:12:03.773 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-10T11:12:03.773 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:03.793 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T11:12:03.794 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm* cephadm* 2026-03-10T11:12:03.868 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:03.869 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T11:12:03.869 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-10T11:12:03.869 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-10T11:12:03.884 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T11:12:03.885 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm* cephadm* 2026-03-10T11:12:03.989 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-10T11:12:03.989 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-10T11:12:04.022 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-10T11:12:04.023 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:04.041 INFO:teuthology.orchestra.run.vm03.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:04.073 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-10T11:12:04.073 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-10T11:12:04.075 INFO:teuthology.orchestra.run.vm03.stdout:Looking for files to backup/remove ... 2026-03-10T11:12:04.077 INFO:teuthology.orchestra.run.vm03.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-10T11:12:04.079 INFO:teuthology.orchestra.run.vm03.stdout:Removing user `cephadm' ... 
2026-03-10T11:12:04.079 INFO:teuthology.orchestra.run.vm03.stdout:Warning: group `nogroup' has no more members. 2026-03-10T11:12:04.089 INFO:teuthology.orchestra.run.vm03.stdout:Done. 2026-03-10T11:12:04.111 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-10T11:12:04.111 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T11:12:04.113 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:04.133 INFO:teuthology.orchestra.run.vm00.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:04.167 INFO:teuthology.orchestra.run.vm00.stdout:Looking for files to backup/remove ... 2026-03-10T11:12:04.169 INFO:teuthology.orchestra.run.vm00.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-10T11:12:04.172 INFO:teuthology.orchestra.run.vm00.stdout:Removing user `cephadm' ... 2026-03-10T11:12:04.172 INFO:teuthology.orchestra.run.vm00.stdout:Warning: group `nogroup' has no more members. 2026-03-10T11:12:04.182 INFO:teuthology.orchestra.run.vm00.stdout:Done. 2026-03-10T11:12:04.205 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T11:12:04.215 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 
10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-10T11:12:04.217 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:04.312 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-10T11:12:04.315 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:05.462 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:05.498 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:05.522 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:05.558 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-10T11:12:05.702 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T11:12:05.703 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T11:12:05.791 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T11:12:05.792 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T11:12:05.890 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T11:12:05.890 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T11:12:05.891 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T11:12:05.891 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T11:12:05.910 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T11:12:05.912 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mds*
2026-03-10T11:12:06.028 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T11:12:06.029 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T11:12:06.029 INFO:teuthology.orchestra.run.vm00.stdout:  libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T11:12:06.029 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T11:12:06.046 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T11:12:06.048 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mds*
2026-03-10T11:12:06.112 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T11:12:06.112 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T11:12:06.157 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T11:12:06.159 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:06.251 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-10T11:12:06.251 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T11:12:06.300 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T11:12:06.303 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:06.629 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T11:12:06.740 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T11:12:06.743 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:06.748 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T11:12:06.885 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T11:12:06.887 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:08.440 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T11:12:08.475 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T11:12:08.569 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T11:12:08.605 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T11:12:08.701 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T11:12:08.701 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T11:12:08.836 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T11:12:08.837 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T11:12:08.939 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T11:12:08.939 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T11:12:08.940 INFO:teuthology.orchestra.run.vm03.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T11:12:08.941 INFO:teuthology.orchestra.run.vm03.stdout:  sg3-utils-udev
2026-03-10T11:12:08.941 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T11:12:08.955 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T11:12:08.955 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T11:12:08.955 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-k8sevents*
2026-03-10T11:12:09.055 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T11:12:09.055 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:  sg3-utils-udev
2026-03-10T11:12:09.057 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T11:12:09.076 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T11:12:09.076 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T11:12:09.077 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-k8sevents*
2026-03-10T11:12:09.136 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-10T11:12:09.136 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T11:12:09.180 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T11:12:09.183 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.194 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.220 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.259 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.275 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-10T11:12:09.275 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T11:12:09.315 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T11:12:09.318 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.332 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.361 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.403 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.747 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T11:12:09.750 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:09.886 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T11:12:09.889 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:11.371 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T11:12:11.404 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T11:12:11.409 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T11:12:11.443 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T11:12:11.633 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T11:12:11.633 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T11:12:11.634 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T11:12:11.634 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:  sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T11:12:11.804 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T11:12:11.812 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T11:12:11.813 INFO:teuthology.orchestra.run.vm03.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T11:12:11.829 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T11:12:11.829 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T11:12:11.829 INFO:teuthology.orchestra.run.vm00.stdout:  libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:  sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T11:12:11.830 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T11:12:11.845 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T11:12:11.846 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T11:12:11.975 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-10T11:12:11.976 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T11:12:12.016 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T11:12:12.019 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:12.056 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-10T11:12:12.056 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T11:12:12.095 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:12.112 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T11:12:12.114 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:12.184 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:12.556 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:12.626 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:12.989 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:13.095 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:13.424 INFO:teuthology.orchestra.run.vm03.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:13.604 INFO:teuthology.orchestra.run.vm00.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:13.894 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:13.935 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:14.013 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:14.052 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:14.414 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T11:12:14.449 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T11:12:14.494 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T11:12:14.531 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T11:12:14.534 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:14.535 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T11:12:14.614 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T11:12:14.617 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:15.118 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:15.271 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:15.526 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:15.715 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:15.968 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:16.153 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:16.415 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:16.623 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:18.122 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:18.160 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:18.385 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:18.386 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:18.447 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:18.480 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:18.557 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:18.557 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:18.557 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort 
python3-paste python3-pastedeploy 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T11:12:18.558 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T11:12:18.559 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T11:12:18.559 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:18.576 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T11:12:18.577 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse* 2026-03-10T11:12:18.699 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:18.699 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:18.776 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T11:12:18.777 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-10T11:12:18.821 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T11:12:18.823 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:18.932 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:18.932 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:18.932 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T11:12:18.933 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T11:12:18.933 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T11:12:18.933 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T11:12:18.933 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T11:12:18.933 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T11:12:18.934 
INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T11:12:18.934 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:18.953 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T11:12:18.954 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse* 2026-03-10T11:12:19.140 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T11:12:19.140 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-10T11:12:19.187 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T11:12:19.190 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:19.280 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T11:12:19.365 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117434 files and directories currently installed.) 2026-03-10T11:12:19.366 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:19.616 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T11:12:19.710 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117434 files and directories currently installed.) 
2026-03-10T11:12:19.712 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:20.932 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:20.970 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:21.177 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:21.178 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:21.382 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-10T11:12:21.383 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:21.383 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:21.383 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript 
python3-pecan python3-portend python3-prettytable 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T11:12:21.384 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:21.419 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:21.419 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:21.419 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:21.453 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:21.456 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:21.629 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:21.630 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:21.669 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 
2026-03-10T11:12:21.670 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:21.855 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-10T11:12:21.855 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:21.855 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:21.855 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T11:12:21.856 
INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T11:12:21.856 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:21.877 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:21.877 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:21.886 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-10T11:12:21.886 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:21.886 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:21.886 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T11:12:21.887 
INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T11:12:21.887 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:21.913 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:21.919 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:21.919 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:21.954 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:22.135 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 
2026-03-10T11:12:22.136 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:22.164 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:22.165 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout:Package 'radosgw' is not installed, so not removed 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa 
python3-simplegeneric 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T11:12:22.345 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:22.348 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 
2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T11:12:22.349 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T11:12:22.350 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T11:12:22.350 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:22.360 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:22.360 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:22.377 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:22.377 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:22.396 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 
2026-03-10T11:12:22.410 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:22.610 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:22.610 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:22.624 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:22.624 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:22.748 INFO:teuthology.orchestra.run.vm00.stdout:Package 'radosgw' is not installed, so not removed 2026-03-10T11:12:22.748 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:22.748 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:22.748 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T11:12:22.749 
INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T11:12:22.749 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:22.765 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:22.765 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:22.765 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:22.765 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:22.766 
INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-10T11:12:22.766 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:22.782 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:22.783 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T11:12:22.785 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T11:12:22.785 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-10T11:12:22.815 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:23.007 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-10T11:12:23.007 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-10T11:12:23.053 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:23.054 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:23.056 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117434 files and directories currently installed.) 2026-03-10T11:12:23.059 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:23.072 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:23.083 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:12:23.328 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:23.328 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:23.328 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:23.328 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T11:12:23.329 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:23.347 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T11:12:23.347 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-10T11:12:23.542 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-10T11:12:23.542 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-10T11:12:23.584 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117434 files and directories currently installed.) 2026-03-10T11:12:23.586 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:12:23.600 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:23.612 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:24.279 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:24.313 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:24.518 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:24.518 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:24.714 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-10T11:12:24.714 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:24.714 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:24.714 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:24.714 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections 
python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-10T11:12:24.715 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:24.742 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:24.742 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:24.776 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:24.777 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T11:12:24.812 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:24.998 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:24.999 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:25.028 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:25.028 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:25.170 
INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:25.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:25.171 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:25.171 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:25.171 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:25.171 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:25.171 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-10T11:12:25.171 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:25.184 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:25.184 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T11:12:25.188 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-10T11:12:25.188 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:25.189 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:25.189 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:25.189 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:25.189 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric 
python3-simplejson 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T11:12:25.190 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:25.216 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:25.216 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:25.219 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:25.254 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:25.447 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:25.448 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:25.478 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:25.479 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T11:12:25.661 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:25.661 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:25.661 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:25.662 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:25.663 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:25.663 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:25.663 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:25.663 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-10T11:12:25.663 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:25.670 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T11:12:25.670 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd* 2026-03-10T11:12:25.696 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-10T11:12:25.696 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:25.696 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:25.696 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:25.696 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T11:12:25.697 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:25.721 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:25.722 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:25.756 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-10T11:12:25.832 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T11:12:25.832 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-10T11:12:25.876 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-10T11:12:25.879 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:25.956 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:25.957 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T11:12:26.110 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:26.110 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:26.110 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:26.110 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:26.111 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:26.112 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:26.112 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T11:12:26.112 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:26.127 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T11:12:26.127 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd* 2026-03-10T11:12:26.311 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-10T11:12:26.311 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-10T11:12:26.354 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-10T11:12:26.356 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:12:27.139 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:27.173 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:27.394 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:27.395 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:27.453 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:27.487 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:27.528 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:27.528 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:27.528 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:27.528 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:27.528 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:27.528 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:27.528 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:27.528 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib 
python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-10T11:12:27.529 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:27.537 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T11:12:27.537 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev* libcephfs2* 2026-03-10T11:12:27.662 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:27.663 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T11:12:27.708 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-10T11:12:27.708 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3202 kB disk space will be freed. 
2026-03-10T11:12:27.747 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-10T11:12:27.750 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:27.762 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:27.785 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:27.785 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:27.785 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:27.785 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth 
python3-jaraco.classes 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T11:12:27.786 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:27.787 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 
2026-03-10T11:12:27.796 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T11:12:27.796 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev* libcephfs2* 2026-03-10T11:12:27.971 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-10T11:12:27.971 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-10T11:12:28.013 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-10T11:12:28.015 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:28.026 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:28.052 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T11:12:28.890 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:28.925 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:29.115 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:29.115 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-10T11:12:29.178 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:29.212 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:29.312 INFO:teuthology.orchestra.run.vm03.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-10T11:12:29.312 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:29.312 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:29.312 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:29.312 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 
2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:29.313 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:29.314 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:29.314 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:29.314 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:29.314 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:29.314 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-10T11:12:29.314 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:29.343 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:29.343 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:29.376 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:29.417 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:29.418 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T11:12:29.571 INFO:teuthology.orchestra.run.vm00.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-10T11:12:29.571 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:29.571 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:29.571 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T11:12:29.571 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T11:12:29.571 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa 
python3-simplegeneric python3-simplejson 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T11:12:29.572 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:29.588 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:29.589 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:29.591 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:29.591 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:29.624 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-10T11:12:29.795 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:29.795 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:29.795 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T11:12:29.795 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T11:12:29.795 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify 
python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:29.796 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:29.797 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:29.797 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:29.797 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T11:12:29.797 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T11:12:29.797 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:29.812 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-10T11:12:29.813 INFO:teuthology.orchestra.run.vm03.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-10T11:12:29.813 INFO:teuthology.orchestra.run.vm03.stdout: qemu-block-extra* rbd-fuse* 2026-03-10T11:12:29.854 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:29.855 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify 
python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T11:12:29.991 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:29.999 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T11:12:29.999 INFO:teuthology.orchestra.run.vm00.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-10T11:12:30.000 INFO:teuthology.orchestra.run.vm00.stdout: qemu-block-extra* rbd-fuse* 2026-03-10T11:12:30.016 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-10T11:12:30.017 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-10T11:12:30.064 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-10T11:12:30.066 INFO:teuthology.orchestra.run.vm03.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.079 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.092 INFO:teuthology.orchestra.run.vm03.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.103 INFO:teuthology.orchestra.run.vm03.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T11:12:30.191 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-10T11:12:30.191 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-10T11:12:30.235 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-10T11:12:30.238 INFO:teuthology.orchestra.run.vm00.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.250 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:12:30.264 INFO:teuthology.orchestra.run.vm00.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.275 INFO:teuthology.orchestra.run.vm00.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T11:12:30.493 INFO:teuthology.orchestra.run.vm03.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.505 INFO:teuthology.orchestra.run.vm03.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.518 INFO:teuthology.orchestra.run.vm03.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.544 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T11:12:30.578 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T11:12:30.649 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-10T11:12:30.651 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T11:12:30.706 INFO:teuthology.orchestra.run.vm00.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.720 INFO:teuthology.orchestra.run.vm00.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T11:12:30.733 INFO:teuthology.orchestra.run.vm00.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T11:12:30.759 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T11:12:30.793 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T11:12:30.866 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-10T11:12:30.868 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-10T11:12:32.105 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:32.138 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:32.341 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:32.357 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:32.358 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-10T11:12:32.375 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 
2026-03-10T11:12:32.515 INFO:teuthology.orchestra.run.vm03.stdout:Package 'librbd1' is not installed, so not removed 2026-03-10T11:12:32.515 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:32.515 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:32.515 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T11:12:32.515 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T11:12:32.515 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T11:12:32.516 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:32.537 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:32.537 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:32.570 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:32.583 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:32.584 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T11:12:32.698 INFO:teuthology.orchestra.run.vm00.stdout:Package 'librbd1' is not installed, so not removed 2026-03-10T11:12:32.698 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:32.699 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T11:12:32.700 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T11:12:32.700 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:32.715 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:32.715 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:32.745 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T11:12:32.761 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-10T11:12:32.761 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-10T11:12:32.889 INFO:teuthology.orchestra.run.vm03.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-10T11:12:32.889 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:32.889 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:32.889 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T11:12:32.889 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T11:12:32.889 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T11:12:32.890 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:32.909 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:32.909 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T11:12:32.910 DEBUG:teuthology.orchestra.run.vm03:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-10T11:12:32.942 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T11:12:32.943 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T11:12:32.966 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-10T11:12:33.041 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-10T11:12:33.081 INFO:teuthology.orchestra.run.vm00.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-10T11:12:33.081 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T11:12:33.081 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T11:12:33.081 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T11:12:33.081 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T11:12:33.081 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T11:12:33.081 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib 
python3-kubernetes python3-logutils python3-mako 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T11:12:33.082 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T11:12:33.097 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-10T11:12:33.097 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T11:12:33.099 DEBUG:teuthology.orchestra.run.vm00:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-10T11:12:33.154 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-10T11:12:33.224 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-10T11:12:33.224 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-10T11:12:33.230 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T11:12:33.416 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-10T11:12:33.416 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T11:12:33.416 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T11:12:33.417 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T11:12:33.417 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T11:12:33.417 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T11:12:33.417 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T11:12:33.417 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T11:12:33.417 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T11:12:33.417 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T11:12:33.417 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T11:12:33.418 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T11:12:33.458 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T11:12:33.458 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T11:12:33.607 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded.
2026-03-10T11:12:33.607 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 107 MB disk space will be freed.
2026-03-10T11:12:33.651 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-10T11:12:33.654 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:33.667 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T11:12:33.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T11:12:33.667 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T11:12:33.667 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T11:12:33.667 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T11:12:33.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T11:12:33.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T11:12:33.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T11:12:33.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T11:12:33.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T11:12:33.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T11:12:33.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T11:12:33.669 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T11:12:33.670 INFO:teuthology.orchestra.run.vm03.stdout:Removing jq (1.6-2.1ubuntu3.1) ...
2026-03-10T11:12:33.683 INFO:teuthology.orchestra.run.vm03.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ...
2026-03-10T11:12:33.696 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T11:12:33.711 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T11:12:33.723 INFO:teuthology.orchestra.run.vm03.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T11:12:33.734 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T11:12:33.745 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T11:12:33.755 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T11:12:33.776 INFO:teuthology.orchestra.run.vm03.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T11:12:33.788 INFO:teuthology.orchestra.run.vm03.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T11:12:33.799 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T11:12:33.810 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T11:12:33.822 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T11:12:33.833 INFO:teuthology.orchestra.run.vm03.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T11:12:33.844 INFO:teuthology.orchestra.run.vm03.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ...
2026-03-10T11:12:33.856 INFO:teuthology.orchestra.run.vm03.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T11:12:33.857 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded.
2026-03-10T11:12:33.857 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 107 MB disk space will be freed.
2026-03-10T11:12:33.868 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T11:12:33.879 INFO:teuthology.orchestra.run.vm03.stdout:Removing luarocks (3.8.0+dfsg1-1) ...
2026-03-10T11:12:33.902 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-10T11:12:33.905 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:33.907 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T11:12:33.919 INFO:teuthology.orchestra.run.vm03.stdout:Removing libnbd0 (1.10.5-1) ...
2026-03-10T11:12:33.922 INFO:teuthology.orchestra.run.vm00.stdout:Removing jq (1.6-2.1ubuntu3.1) ...
2026-03-10T11:12:33.932 INFO:teuthology.orchestra.run.vm03.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T11:12:33.938 INFO:teuthology.orchestra.run.vm00.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ...
2026-03-10T11:12:33.943 INFO:teuthology.orchestra.run.vm03.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T11:12:33.951 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T11:12:33.955 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T11:12:33.968 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ...
2026-03-10T11:12:33.969 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T11:12:33.980 INFO:teuthology.orchestra.run.vm03.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T11:12:33.983 INFO:teuthology.orchestra.run.vm00.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T11:12:34.099 INFO:teuthology.orchestra.run.vm03.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T11:12:34.102 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T11:12:34.111 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ...
2026-03-10T11:12:34.116 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T11:12:34.119 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: deferring update (trigger activated)
2026-03-10T11:12:34.130 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ...
2026-03-10T11:12:34.130 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T11:12:34.149 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ...
2026-03-10T11:12:34.152 INFO:teuthology.orchestra.run.vm00.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T11:12:34.162 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-any (27ubuntu1) ...
2026-03-10T11:12:34.164 INFO:teuthology.orchestra.run.vm00.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T11:12:34.174 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-sec:amd64 (1.0.2-1) ...
2026-03-10T11:12:34.178 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T11:12:34.187 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T11:12:34.192 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T11:12:34.203 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua5.1 (5.1.5-8.1build4) ...
2026-03-10T11:12:34.207 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T11:12:34.219 INFO:teuthology.orchestra.run.vm00.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T11:12:34.223 INFO:teuthology.orchestra.run.vm03.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T11:12:34.233 INFO:teuthology.orchestra.run.vm00.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ...
2026-03-10T11:12:34.246 INFO:teuthology.orchestra.run.vm00.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T11:12:34.258 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T11:12:34.272 INFO:teuthology.orchestra.run.vm00.stdout:Removing luarocks (3.8.0+dfsg1-1) ...
2026-03-10T11:12:34.300 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T11:12:34.315 INFO:teuthology.orchestra.run.vm00.stdout:Removing libnbd0 (1.10.5-1) ...
2026-03-10T11:12:34.325 INFO:teuthology.orchestra.run.vm00.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T11:12:34.335 INFO:teuthology.orchestra.run.vm00.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T11:12:34.345 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T11:12:34.355 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ...
2026-03-10T11:12:34.365 INFO:teuthology.orchestra.run.vm00.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T11:12:34.375 INFO:teuthology.orchestra.run.vm00.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T11:12:34.386 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ...
2026-03-10T11:12:34.393 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: deferring update (trigger activated)
2026-03-10T11:12:34.401 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ...
2026-03-10T11:12:34.419 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ...
2026-03-10T11:12:34.429 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-any (27ubuntu1) ...
2026-03-10T11:12:34.439 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-sec:amd64 (1.0.2-1) ...
2026-03-10T11:12:34.450 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T11:12:34.462 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua5.1 (5.1.5-8.1build4) ...
2026-03-10T11:12:34.477 INFO:teuthology.orchestra.run.vm00.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ...
2026-03-10T11:12:34.623 INFO:teuthology.orchestra.run.vm03.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T11:12:34.654 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T11:12:34.677 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T11:12:34.733 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-10T11:12:34.804 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-10T11:12:34.855 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-10T11:12:34.871 INFO:teuthology.orchestra.run.vm00.stdout:Removing pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T11:12:34.904 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T11:12:34.904 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T11:12:34.914 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T11:12:34.929 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T11:12:34.969 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T11:12:34.993 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webtest (2.0.35-1) ...
2026-03-10T11:12:35.042 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastescript (2.0.2-4) ...
2026-03-10T11:12:35.094 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastedeploy (2.1.1-1) ...
2026-03-10T11:12:35.143 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T11:12:35.154 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T11:12:35.215 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T11:12:35.230 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-10T11:12:35.281 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-10T11:12:35.328 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:35.375 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:35.423 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-10T11:12:35.469 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-google-auth (1.5.1-3) ...
2026-03-10T11:12:35.484 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T11:12:35.518 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cachetools (5.0.0-1) ...
2026-03-10T11:12:35.533 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-10T11:12:35.567 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:35.580 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-10T11:12:35.615 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T11:12:35.627 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-10T11:12:35.667 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cherrypy3 (18.6.1-4) ...
2026-03-10T11:12:35.673 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-10T11:12:35.721 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-10T11:12:35.726 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T11:12:35.769 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-10T11:12:35.778 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.collections (3.4.0-2) ...
2026-03-10T11:12:35.816 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T11:12:35.827 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.classes (3.2.1-3) ...
2026-03-10T11:12:35.877 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-portend (3.0.0-1) ...
2026-03-10T11:12:35.932 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempora (4.1.2-1) ...
2026-03-10T11:12:35.947 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T11:12:35.986 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.text (3.6.0-2) ...
2026-03-10T11:12:36.010 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-10T11:12:36.039 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-10T11:12:36.063 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T11:12:36.095 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T11:12:36.119 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-10T11:12:36.179 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T11:12:36.240 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T11:12:36.252 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-10T11:12:36.308 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-10T11:12:36.310 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-10T11:12:36.365 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T11:12:36.370 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-10T11:12:36.422 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-10T11:12:36.423 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T11:12:36.475 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T11:12:36.479 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-10T11:12:36.533 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T11:12:36.540 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-10T11:12:36.585 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rsa (4.8-1) ...
2026-03-10T11:12:36.591 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-10T11:12:36.640 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-10T11:12:36.647 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-10T11:12:36.690 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-10T11:12:36.698 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T11:12:36.749 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-10T11:12:36.760 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-10T11:12:36.805 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T11:12:36.812 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T11:12:36.834 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T11:12:36.871 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rsa (4.8-1) ...
2026-03-10T11:12:36.889 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-10T11:12:36.925 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-10T11:12:36.941 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T11:12:37.087 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-10T11:12:37.089 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T11:12:37.185 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T11:12:37.186 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-10T11:12:37.234 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-10T11:12:37.237 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T11:12:37.265 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T11:12:37.287 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T11:12:37.323 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-10T11:12:37.341 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-10T11:12:37.372 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-10T11:12:37.390 INFO:teuthology.orchestra.run.vm03.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-10T11:12:37.413 INFO:teuthology.orchestra.run.vm03.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T11:12:37.424 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T11:12:37.477 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T11:12:37.526 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-10T11:12:37.581 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T11:12:37.635 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-10T11:12:37.685 INFO:teuthology.orchestra.run.vm00.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-10T11:12:37.709 INFO:teuthology.orchestra.run.vm00.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T11:12:37.886 INFO:teuthology.orchestra.run.vm03.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-10T11:12:37.901 INFO:teuthology.orchestra.run.vm03.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-10T11:12:37.922 INFO:teuthology.orchestra.run.vm03.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-10T11:12:37.941 INFO:teuthology.orchestra.run.vm03.stdout:Removing zip (3.0-12build2) ...
2026-03-10T11:12:37.968 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T11:12:37.981 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T11:12:38.028 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T11:12:38.035 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-10T11:12:38.053 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-10T11:12:38.181 INFO:teuthology.orchestra.run.vm00.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-10T11:12:38.195 INFO:teuthology.orchestra.run.vm00.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-10T11:12:38.217 INFO:teuthology.orchestra.run.vm00.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-10T11:12:38.238 INFO:teuthology.orchestra.run.vm00.stdout:Removing zip (3.0-12build2) ...
2026-03-10T11:12:38.268 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T11:12:38.281 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T11:12:38.337 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T11:12:38.348 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-10T11:12:38.366 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-10T11:12:39.615 INFO:teuthology.orchestra.run.vm03.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-10T11:12:39.616 INFO:teuthology.orchestra.run.vm03.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-10T11:12:39.928 INFO:teuthology.orchestra.run.vm00.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-10T11:12:39.928 INFO:teuthology.orchestra.run.vm00.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-10T11:12:41.475 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T11:12:41.478 DEBUG:teuthology.parallel:result is None
2026-03-10T11:12:41.984 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T11:12:41.986 DEBUG:teuthology.parallel:result is None
2026-03-10T11:12:41.987 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm00.local
2026-03-10T11:12:41.987 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local
2026-03-10T11:12:41.987 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-10T11:12:41.987 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-10T11:12:41.995 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update
2026-03-10T11:12:42.039 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update
2026-03-10T11:12:42.308 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T11:12:42.341 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T11:12:42.343 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T11:12:42.376 INFO:teuthology.orchestra.run.vm00.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T11:12:42.378 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T11:12:42.413 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T11:12:42.556 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T11:12:43.103 INFO:teuthology.orchestra.run.vm00.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T11:12:43.419 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-10T11:12:43.432 DEBUG:teuthology.parallel:result is None
2026-03-10T11:12:43.972 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T11:12:43.983 DEBUG:teuthology.parallel:result is None
2026-03-10T11:12:43.983 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T11:12:43.986 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T11:12:43.987 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T11:12:43.988 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:==============================================================================
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:-static.241.200. 168.239.11.197 2 u 27 64 77 25.045 -0.042 0.968
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:#x.ns.gin.ntt.ne 129.250.35.222 2 u 26 64 77 20.347 -0.643 0.213
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:#timegoesbrrr.ne 131.188.3.222 2 u 28 64 37 28.735 -2.957 0.320
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:#mail.sassmann.n 192.53.103.103 2 u 29 64 77 23.659 -0.303 0.365
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:*ntp5.kernfusion 237.17.204.95 2 u 32 64 77 28.968 +0.235 0.214
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:-ntp2.kernfusion 192.53.103.108 2 u 28 64 77 30.077 +1.797 2.051
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:-172-236-195-26. 233.72.92.146 3 u 27 64 77 25.236 -2.424 0.190
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:-cluster015.lino 130.149.17.21 2 u 26 64 77 25.072 +0.410 0.432
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:-ntp4.lwlcom.net .GPS. 1 u 33 64 77 31.196 +3.405 0.344
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:+ernie.gerger-ne 213.172.96.14 3 u 31 64 77 32.020 +0.180 0.314
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:#139-144-71-56.i 80.192.165.246 2 u 28 64 77 22.754 -3.770 1.449
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:+185.252.140.125 216.239.35.4 2 u 24 64 77 25.073 -0.162 0.218
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:-vps-ber1.orlean 127.65.222.189 2 u 28 64 77 28.907 +0.931 0.447
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:#alphyn.canonica 132.163.96.1 2 u 38 64 77 103.637 -1.889 1.351
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:-178.215.228.24 189.97.54.122 2 u 26 64 77 21.991 -0.672 0.310
2026-03-10T11:12:44.357 INFO:teuthology.orchestra.run.vm00.stdout:#185.125.190.58 145.238.80.80 2 u 39 64 77 36.680 -0.763 0.810
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:+static.241.200. 168.239.11.197 2 u 26 64 77 24.988 -0.004 0.217
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:#139-144-71-56.i 80.192.165.246 2 u 24 64 77 22.480 -4.550 0.644
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:+ntp5.kernfusion 237.17.204.95 2 u 30 64 77 28.880 +0.490 0.194
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:#x.ns.gin.ntt.ne 129.250.35.222 2 u 32 64 77 20.316 -0.462 0.219
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:-ntp2.kernfusion 192.53.103.108 2 u 27 64 77 32.224 -0.230 1.445
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:-cluster015.lino 130.149.17.21 2 u 26 64 77 24.986 +1.060 0.222
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:*185.252.140.125 216.239.35.4 2 u 22 64 77 25.047 +0.494 0.284
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:-ntp4.lwlcom.net .GPS. 1 u 31 64 77 30.878 +3.579 0.120
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:-104-167-24-26.l 185.232.69.65 2 u 27 64 77 26.456 -2.820 0.271
2026-03-10T11:12:44.452 INFO:teuthology.orchestra.run.vm03.stdout:-vps-ber1.orlean 127.65.222.189 2 u 25 64 77 28.839 +0.782 0.183
2026-03-10T11:12:44.453 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.58 145.238.80.80 2 u 36 64 77 35.217 -1.533 0.385
2026-03-10T11:12:44.453 INFO:teuthology.orchestra.run.vm03.stdout:-172-236-195-26. 233.72.92.146 3 u 29 64 77 23.930 -1.383 0.142
2026-03-10T11:12:44.453 INFO:teuthology.orchestra.run.vm03.stdout:#alphyn.canonica 132.163.96.1 2 u 31 64 77 102.151 -2.468 0.292
2026-03-10T11:12:44.453 INFO:teuthology.orchestra.run.vm03.stdout:-178.215.228.24 189.97.54.122 2 u 23 64 77 21.880 -0.663 0.233
2026-03-10T11:12:44.453 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.56 79.243.60.50 2 u 39 64 77 35.243 -0.761 0.304
2026-03-10T11:12:44.453 INFO:teuthology.orchestra.run.vm03.stdout:#185.125.190.57 194.121.207.249 2 u 34 64 77 33.252 +1.387 0.243
2026-03-10T11:12:44.453 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T11:12:44.455 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T11:12:44.455 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T11:12:44.458 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T11:12:44.460 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T11:12:44.462 INFO:teuthology.task.internal:Duration was 521.806372 seconds
2026-03-10T11:12:44.462 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T11:12:44.464 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T11:12:44.464 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T11:12:44.465 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T11:12:44.488 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T11:12:44.488 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local
2026-03-10T11:12:44.488 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T11:12:44.538 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-10T11:12:44.538 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T11:12:44.550 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T11:12:44.550 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T11:12:44.579 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T11:12:44.623 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T11:12:44.623 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T11:12:44.655 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T11:12:44.661 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T11:12:44.662 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T11:12:44.662 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T11:12:44.662 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T11:12:44.663 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0%/home/ubuntu/cephtest/archive/syslog/journalctl.log: -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T11:12:44.668 INFO:teuthology.orchestra.run.vm00.stderr: 88.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T11:12:44.672 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T11:12:44.673 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T11:12:44.673 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gzgzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T11:12:44.673 INFO:teuthology.orchestra.run.vm03.stderr:
2026-03-10T11:12:44.673 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T11:12:44.679 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 87.7% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T11:12:44.680 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T11:12:44.683 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T11:12:44.683 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T11:12:44.718 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T11:12:44.729 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T11:12:44.731 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T11:12:44.759 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T11:12:44.765 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-03-10T11:12:44.779 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-10T11:12:44.787 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T11:12:44.816 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:12:44.817 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T11:12:44.831 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:12:44.831 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T11:12:44.833 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T11:12:44.834 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009/remote/vm00
2026-03-10T11:12:44.834 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T11:12:44.866 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1009/remote/vm03
2026-03-10T11:12:44.866 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T11:12:44.880 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T11:12:44.880 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T11:12:44.907 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T11:12:44.923 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T11:12:44.926 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T11:12:44.926 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T11:12:44.928 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T11:12:44.928 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T11:12:44.951 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T11:12:44.954 INFO:teuthology.orchestra.run.vm00.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 11:12 /home/ubuntu/cephtest
2026-03-10T11:12:44.967 INFO:teuthology.orchestra.run.vm03.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 11:12 /home/ubuntu/cephtest
2026-03-10T11:12:44.968 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T11:12:44.974 INFO:teuthology.run:Summary data: description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_ca_signed_key} duration: 521.8063721656799 flavor: default owner: kyr success: true
2026-03-10T11:12:44.974 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T11:12:44.999 INFO:teuthology.run:pass