2026-03-10T06:57:18.240 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T06:57:18.244 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T06:57:18.262 INFO:teuthology.run:Config: archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938
branch: squid
description: orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_ca_signed_key}
email: null
first_in_suite: false
flavor: default
job_id: '938'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: true
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    use-ca-signed-key: true
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:logrotate_t:s0
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
  - client.0
- - host.b
  - mon.b
  - mgr.b
  - osd.1
  - client.1
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJNM1bfzi50wamfabNZfEXEqrNna+nQLxVMKC5uOgq1nzH8HzwjvEFEXftqf0+SXlit9Dg8fI2liTFzuGTlNPT8=
  vm08.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBqM81Ixr5/7wD5zUHh3bnf74O+9feIhGtFbP88kboroqv3ruVM14Fwj4XQ6S1JCd5MuEHUmTikDhhJxIDLYZn4=
tasks:
- pexec:
    all:
    - sudo dnf remove nvme-cli -y
    - sudo dnf install nvmetcli nvme-cli -y
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - "set -ex\nHOSTNAMES=$(ceph orch host ls --format json | jq -r '.[] | .hostname')\nfor host in $HOSTNAMES; do\n # do a check-host on each host to make sure it's reachable\n ceph cephadm check-host ${host} 2> ${host}-ok.txt\n HOST_OK=$(cat ${host}-ok.txt)\n if ! grep -q \"Host looks OK\" <<< \"$HOST_OK\"; then\n  printf \"Failed host check:\\n\\n$HOST_OK\"\n  exit 1\n fi\ndone\n"
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T06:57:18.262 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T06:57:18.262 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T06:57:18.262 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T06:57:18.263 INFO:teuthology.task.internal:Checking packages...
2026-03-10T06:57:18.263 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T06:57:18.263 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T06:57:18.263 INFO:teuthology.packaging:ref: None
2026-03-10T06:57:18.263 INFO:teuthology.packaging:tag: None
2026-03-10T06:57:18.263 INFO:teuthology.packaging:branch: squid
2026-03-10T06:57:18.263 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:57:18.263 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-10T06:57:19.029 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-10T06:57:19.030 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T06:57:19.030 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T06:57:19.030 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T06:57:19.031 INFO:teuthology.task.internal:Saving configuration
2026-03-10T06:57:19.035 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T06:57:19.035 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T06:57:19.041 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm01.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 06:56:16.620596', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:01', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJNM1bfzi50wamfabNZfEXEqrNna+nQLxVMKC5uOgq1nzH8HzwjvEFEXftqf0+SXlit9Dg8fI2liTFzuGTlNPT8='}
2026-03-10T06:57:19.047 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm08.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 06:56:16.621025', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:08', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBqM81Ixr5/7wD5zUHh3bnf74O+9feIhGtFbP88kboroqv3ruVM14Fwj4XQ6S1JCd5MuEHUmTikDhhJxIDLYZn4='}
2026-03-10T06:57:19.047 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T06:57:19.048 INFO:teuthology.task.internal:roles: ubuntu@vm01.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0', 'client.0']
2026-03-10T06:57:19.048 INFO:teuthology.task.internal:roles: ubuntu@vm08.local - ['host.b', 'mon.b', 'mgr.b', 'osd.1', 'client.1']
2026-03-10T06:57:19.048 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T06:57:19.053 DEBUG:teuthology.task.console_log:vm01 does not support IPMI; excluding
2026-03-10T06:57:19.059 DEBUG:teuthology.task.console_log:vm08 does not support IPMI; excluding
2026-03-10T06:57:19.059 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f623a9abd90>, signals=[15])
2026-03-10T06:57:19.059 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T06:57:19.060 INFO:teuthology.task.internal:Opening connections...
2026-03-10T06:57:19.060 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local
2026-03-10T06:57:19.060 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:57:19.119 DEBUG:teuthology.task.internal:connecting to ubuntu@vm08.local
2026-03-10T06:57:19.119 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:57:19.179 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T06:57:19.180 DEBUG:teuthology.orchestra.run.vm01:> uname -m
2026-03-10T06:57:19.219 INFO:teuthology.orchestra.run.vm01.stdout:x86_64
2026-03-10T06:57:19.219 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:NAME="CentOS Stream"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="9"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:ID="centos"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE="rhel fedora"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="9"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:PLATFORM_ID="platform:el9"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:ANSI_COLOR="0;31"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:LOGO="fedora-logo-icon"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://centos.org/"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T06:57:19.273 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T06:57:19.273 INFO:teuthology.lock.ops:Updating vm01.local on lock server
2026-03-10T06:57:19.277 DEBUG:teuthology.orchestra.run.vm08:> uname -m
2026-03-10T06:57:19.291 INFO:teuthology.orchestra.run.vm08.stdout:x86_64
2026-03-10T06:57:19.291 DEBUG:teuthology.orchestra.run.vm08:> cat /etc/os-release
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:NAME="CentOS Stream"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:VERSION="9"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:ID="centos"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:ID_LIKE="rhel fedora"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_ID="9"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:PLATFORM_ID="platform:el9"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:ANSI_COLOR="0;31"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:LOGO="fedora-logo-icon"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:HOME_URL="https://centos.org/"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-10T06:57:19.345 INFO:teuthology.orchestra.run.vm08.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-10T06:57:19.346 INFO:teuthology.lock.ops:Updating vm08.local on lock server
2026-03-10T06:57:19.350 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T06:57:19.351 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T06:57:19.352 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T06:57:19.352 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest
2026-03-10T06:57:19.353 DEBUG:teuthology.orchestra.run.vm08:> test '!' -e /home/ubuntu/cephtest
2026-03-10T06:57:19.399 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T06:57:19.401 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T06:57:19.401 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph)
2026-03-10T06:57:19.407 DEBUG:teuthology.orchestra.run.vm08:> test -z $(ls -A /var/lib/ceph)
2026-03-10T06:57:19.419 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T06:57:19.455 INFO:teuthology.orchestra.run.vm08.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T06:57:19.455 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T06:57:19.462 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready
2026-03-10T06:57:19.475 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:57:19.660 DEBUG:teuthology.orchestra.run.vm08:> test -e /ceph-qa-ready
2026-03-10T06:57:19.675 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:57:19.854 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T06:57:19.856 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T06:57:19.856 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T06:57:19.858 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T06:57:19.873 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T06:57:19.875 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T06:57:19.876 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T06:57:19.876 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T06:57:19.914 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T06:57:19.931 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T06:57:19.933 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T06:57:19.933 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T06:57:19.983 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:57:19.983 DEBUG:teuthology.orchestra.run.vm08:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T06:57:19.997 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T06:57:19.997 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T06:57:20.026 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T06:57:20.050 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:57:20.059 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:57:20.064 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:57:20.074 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T06:57:20.075 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T06:57:20.077 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T06:57:20.077 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T06:57:20.102 DEBUG:teuthology.orchestra.run.vm08:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T06:57:20.138 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T06:57:20.140 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T06:57:20.141 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T06:57:20.165 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T06:57:20.192 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:57:20.240 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:57:20.296 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:57:20.296 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T06:57:20.356 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T06:57:20.377 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T06:57:20.432 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:57:20.432 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T06:57:20.490 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart
2026-03-10T06:57:20.492 DEBUG:teuthology.orchestra.run.vm08:> sudo service rsyslog restart
2026-03-10T06:57:20.517 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:57:20.556 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-10T06:57:20.800 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T06:57:20.802 INFO:teuthology.task.internal:Starting timer...
2026-03-10T06:57:20.802 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T06:57:20.804 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T06:57:20.806 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:logrotate_t:s0', 'scontext=system_u:system_r:getty_t:s0']}
2026-03-10T06:57:20.806 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported
2026-03-10T06:57:20.806 INFO:teuthology.task.selinux:Excluding vm08: VMs are not yet supported
2026-03-10T06:57:20.806 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T06:57:20.806 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T06:57:20.806 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T06:57:20.807 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T06:57:20.808 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T06:57:20.808 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T06:57:20.809 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T06:57:21.495 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T06:57:21.500 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T06:57:21.500 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryy8cj9ti4 --limit vm01.local,vm08.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T06:58:58.915 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local'), Remote(name='ubuntu@vm08.local')]
2026-03-10T06:58:58.915 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local'
2026-03-10T06:58:58.916 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:58:58.984 DEBUG:teuthology.orchestra.run.vm01:> true
2026-03-10T06:58:59.063 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local'
2026-03-10T06:58:59.063 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm08.local'
2026-03-10T06:58:59.063 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T06:58:59.126 DEBUG:teuthology.orchestra.run.vm08:> true
2026-03-10T06:58:59.205 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm08.local'
2026-03-10T06:58:59.205 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T06:58:59.208 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T06:58:59.208 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T06:58:59.208 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:58:59.210 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T06:58:59.210 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T06:58:59.251 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T06:58:59.267 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T06:58:59.287 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-10T06:58:59.308 INFO:teuthology.orchestra.run.vm08.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-10T06:58:59.309 INFO:teuthology.orchestra.run.vm01.stderr:sudo: ntpd: command not found
2026-03-10T06:58:59.329 INFO:teuthology.orchestra.run.vm01.stdout:506 Cannot talk to daemon
2026-03-10T06:58:59.350 INFO:teuthology.orchestra.run.vm08.stderr:sudo: ntpd: command not found
2026-03-10T06:58:59.352 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T06:58:59.372 INFO:teuthology.orchestra.run.vm08.stdout:506 Cannot talk to daemon
2026-03-10T06:58:59.374 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T06:58:59.393 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-10T06:58:59.415 INFO:teuthology.orchestra.run.vm08.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-10T06:58:59.430 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:58:59.472 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found
2026-03-10T06:58:59.527 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:58:59.527 INFO:teuthology.orchestra.run.vm01.stdout:===============================================================================
2026-03-10T06:58:59.527 INFO:teuthology.orchestra.run.vm01.stdout:^? 141.84.43.75 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T06:58:59.527 INFO:teuthology.orchestra.run.vm01.stdout:^? stratum2-2.NTP.TechFak.N> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T06:58:59.527 INFO:teuthology.orchestra.run.vm01.stdout:^? vps-fra1.orleans.ddnss.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T06:58:59.527 INFO:teuthology.orchestra.run.vm01.stdout:^? time.cloudflare.com 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T06:58:59.528 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-10T06:58:59.528 INFO:teuthology.orchestra.run.vm08.stdout:===============================================================================
2026-03-10T06:58:59.528 INFO:teuthology.orchestra.run.vm08.stdout:^? vps-fra1.orleans.ddnss.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T06:58:59.528 INFO:teuthology.orchestra.run.vm08.stdout:^? time.cloudflare.com 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T06:58:59.528 INFO:teuthology.orchestra.run.vm08.stdout:^? 141.84.43.75 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T06:58:59.528 INFO:teuthology.orchestra.run.vm08.stdout:^? stratum2-2.NTP.TechFak.N> 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-10T06:58:59.528 INFO:teuthology.run_tasks:Running task pexec...
2026-03-10T06:58:59.531 INFO:teuthology.task.pexec:Executing custom commands...
2026-03-10T06:58:59.531 DEBUG:teuthology.orchestra.run.vm01:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T06:58:59.532 DEBUG:teuthology.orchestra.run.vm08:> TESTDIR=/home/ubuntu/cephtest bash -s
2026-03-10T06:58:59.570 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo dnf remove nvme-cli -y
2026-03-10T06:58:59.570 DEBUG:teuthology.task.pexec:ubuntu@vm01.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:58:59.570 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm01.local
2026-03-10T06:58:59.570 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T06:58:59.570 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:58:59.572 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf remove nvme-cli -y
2026-03-10T06:58:59.572 DEBUG:teuthology.task.pexec:ubuntu@vm08.local< sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:58:59.572 INFO:teuthology.task.pexec:Running commands on host ubuntu@vm08.local
2026-03-10T06:58:59.572 INFO:teuthology.task.pexec:sudo dnf remove nvme-cli -y
2026-03-10T06:58:59.572 INFO:teuthology.task.pexec:sudo dnf install nvmetcli nvme-cli -y
2026-03-10T06:58:59.810 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: nvme-cli
2026-03-10T06:58:59.810 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T06:58:59.813 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T06:58:59.813 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T06:58:59.813 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:58:59.825 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: nvme-cli
2026-03-10T06:58:59.825 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T06:58:59.829 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:58:59.830 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T06:58:59.830 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T06:59:00.315 INFO:teuthology.orchestra.run.vm01.stdout:Last metadata expiration check: 0:01:06 ago on Tue 10 Mar 2026 06:57:54 AM UTC. 2026-03-10T06:59:00.383 INFO:teuthology.orchestra.run.vm08.stdout:Last metadata expiration check: 0:01:06 ago on Tue 10 Mar 2026 06:57:54 AM UTC. 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: Package Architecture Version Repository Size 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout:Installing: 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout:Installing dependencies: 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T06:59:00.427 
INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout:Install 6 Packages 2026-03-10T06:59:00.427 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:59:00.428 INFO:teuthology.orchestra.run.vm01.stdout:Total download size: 2.3 M 2026-03-10T06:59:00.428 INFO:teuthology.orchestra.run.vm01.stdout:Installed size: 11 M 2026-03-10T06:59:00.428 INFO:teuthology.orchestra.run.vm01.stdout:Downloading Packages: 2026-03-10T06:59:00.493 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: Package Architecture Version Repository Size 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout:Installing: 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli x86_64 2.16-1.el9 baseos 1.2 M 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: nvmetcli noarch 0.8-3.el9 baseos 44 k 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies: 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell noarch 1:1.1.30-1.el9 baseos 72 k 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: python3-kmod x86_64 0.9-32.el9 baseos 84 k 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: python3-urwid x86_64 2.1.2-4.el9 baseos 837 k 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:00.494 
INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout:Install 6 Packages 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:00.494 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 2.3 M 2026-03-10T06:59:00.495 INFO:teuthology.orchestra.run.vm08.stdout:Installed size: 11 M 2026-03-10T06:59:00.495 INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages: 2026-03-10T06:59:00.667 INFO:teuthology.orchestra.run.vm01.stdout:(1/6): nvmetcli-0.8-3.el9.noarch.rpm 310 kB/s | 44 kB 00:00 2026-03-10T06:59:00.696 INFO:teuthology.orchestra.run.vm01.stdout:(2/6): python3-configshell-1.1.30-1.el9.noarch. 423 kB/s | 72 kB 00:00 2026-03-10T06:59:00.775 INFO:teuthology.orchestra.run.vm01.stdout:(3/6): python3-kmod-0.9-32.el9.x86_64.rpm 783 kB/s | 84 kB 00:00 2026-03-10T06:59:00.801 INFO:teuthology.orchestra.run.vm01.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 1.4 MB/s | 150 kB 00:00 2026-03-10T06:59:00.938 INFO:teuthology.orchestra.run.vm01.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 2.8 MB/s | 1.2 MB 00:00 2026-03-10T06:59:00.986 INFO:teuthology.orchestra.run.vm08.stdout:(1/6): python3-configshell-1.1.30-1.el9.noarch. 
2.0 MB/s | 72 kB 00:00
2026-03-10T06:59:01.004 INFO:teuthology.orchestra.run.vm08.stdout:(2/6): python3-kmod-0.9-32.el9.x86_64.rpm 4.9 MB/s | 84 kB 00:00
2026-03-10T06:59:01.012 INFO:teuthology.orchestra.run.vm08.stdout:(3/6): nvmetcli-0.8-3.el9.noarch.rpm 712 kB/s | 44 kB 00:00
2026-03-10T06:59:01.029 INFO:teuthology.orchestra.run.vm01.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 3.2 MB/s | 837 kB 00:00
2026-03-10T06:59:01.029 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T06:59:01.029 INFO:teuthology.orchestra.run.vm01.stdout:Total 3.8 MB/s | 2.3 MB 00:00
2026-03-10T06:59:01.032 INFO:teuthology.orchestra.run.vm08.stdout:(4/6): python3-pyparsing-2.4.7-9.el9.noarch.rpm 5.2 MB/s | 150 kB 00:00
2026-03-10T06:59:01.037 INFO:teuthology.orchestra.run.vm08.stdout:(5/6): nvme-cli-2.16-1.el9.x86_64.rpm 13 MB/s | 1.2 MB 00:00
2026-03-10T06:59:01.119 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:59:01.121 INFO:teuthology.orchestra.run.vm08.stdout:(6/6): python3-urwid-2.1.2-4.el9.x86_64.rpm 7.5 MB/s | 837 kB 00:00
2026-03-10T06:59:01.121 INFO:teuthology.orchestra.run.vm08.stdout:--------------------------------------------------------------------------------
2026-03-10T06:59:01.121 INFO:teuthology.orchestra.run.vm08.stdout:Total 3.7 MB/s | 2.3 MB 00:00
2026-03-10T06:59:01.126 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:59:01.126 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:59:01.190 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:59:01.193 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:59:01.194 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:59:01.200 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:59:01.200 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:59:01.263 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:59:01.264 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:59:01.393 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:59:01.408 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-10T06:59:01.424 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-10T06:59:01.436 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T06:59:01.446 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T06:59:01.448 INFO:teuthology.orchestra.run.vm01.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T06:59:01.471 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:59:01.485 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-urwid-2.1.2-4.el9.x86_64 1/6
2026-03-10T06:59:01.501 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 2/6
2026-03-10T06:59:01.513 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T06:59:01.528 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T06:59:01.531 INFO:teuthology.orchestra.run.vm08.stdout: Installing : nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T06:59:01.638 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T06:59:01.646 INFO:teuthology.orchestra.run.vm01.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T06:59:01.741 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: nvmetcli-0.8-3.el9.noarch 5/6
2026-03-10T06:59:01.747
INFO:teuthology.orchestra.run.vm08.stdout: Installing : nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T06:59:02.070 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T06:59:02.070 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T06:59:02.070 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:02.137 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: nvme-cli-2.16-1.el9.x86_64 6/6
2026-03-10T06:59:02.137 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /usr/lib/systemd/system/nvmefc-boot-connections.service.
2026-03-10T06:59:02.137 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : nvme-cli-2.16-1.el9.x86_64 1/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : nvmetcli-0.8-3.el9.noarch 2/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-configshell-1:1.1.30-1.el9.noarch 3/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kmod-0.9-32.el9.x86_64 4/6
2026-03-10T06:59:02.794 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 5/6
2026-03-10T06:59:02.895 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-10T06:59:02.895 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:02.895 INFO:teuthology.orchestra.run.vm01.stdout:Installed:
2026-03-10T06:59:02.895 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T06:59:02.895 INFO:teuthology.orchestra.run.vm01.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T06:59:02.895 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T06:59:02.895 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:02.895 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T06:59:02.898 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urwid-2.1.2-4.el9.x86_64 6/6
2026-03-10T06:59:02.898 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:02.898 INFO:teuthology.orchestra.run.vm08.stdout:Installed:
2026-03-10T06:59:02.898 INFO:teuthology.orchestra.run.vm08.stdout: nvme-cli-2.16-1.el9.x86_64 nvmetcli-0.8-3.el9.noarch
2026-03-10T06:59:02.898 INFO:teuthology.orchestra.run.vm08.stdout: python3-configshell-1:1.1.30-1.el9.noarch python3-kmod-0.9-32.el9.x86_64
2026-03-10T06:59:02.898 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyparsing-2.4.7-9.el9.noarch python3-urwid-2.1.2-4.el9.x86_64
2026-03-10T06:59:02.898 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:02.898 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T06:59:02.965 DEBUG:teuthology.parallel:result is None
2026-03-10T06:59:02.974 DEBUG:teuthology.parallel:result is None
2026-03-10T06:59:02.974 INFO:teuthology.run_tasks:Running task install...
2026-03-10T06:59:02.976 DEBUG:teuthology.task.install:project ceph
2026-03-10T06:59:02.976 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T06:59:02.976 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T06:59:02.976 INFO:teuthology.task.install:Using flavor: default
2026-03-10T06:59:02.978 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T06:59:02.978 INFO:teuthology.task.install:extra packages: []
2026-03-10T06:59:02.978 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T06:59:02.978 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:59:02.979 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-10T06:59:02.979 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T06:59:03.579 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T06:59:03.579 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T06:59:03.613 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-10T06:59:03.613 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-10T06:59:04.092 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
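(The "Querying https://shaman.ceph.com/api/search?..." lines above show how teuthology locates a ready build for the exact sha1 before pulling packages from chacra. A minimal sketch of how such a search URL is assembled — the helper below is illustrative, not teuthology's actual implementation; only the URL shape is taken from the log:)

```python
from urllib.parse import urlencode

# Real endpoint, as seen in the log; the helper itself is a hypothetical sketch.
SHAMAN_SEARCH = "https://shaman.ceph.com/api/search"

def shaman_search_url(project, sha1, distro, release, arch,
                      flavor="default", status="ready"):
    """Build a shaman build-search URL matching the query logged above."""
    params = {
        "status": status,            # only builds marked ready
        "project": project,          # e.g. "ceph"
        "flavor": flavor,            # e.g. "default"
        "distros": f"{distro}/{release}/{arch}",  # urlencoded to centos%2F9%2Fx86_64
        "sha1": sha1,                # exact commit being tested
    }
    return f"{SHAMAN_SEARCH}?{urlencode(params)}"
```

A matching build record then points at a chacra repo URL of the form `.../r/ceph/<branch>/<sha1>/centos/9/flavors/default/`, which is what the "Pulling from" lines report.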
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T06:59:04.092 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T06:59:04.092 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T06:59:04.130 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T06:59:04.130 DEBUG:teuthology.orchestra.run.vm08:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T06:59:04.170 INFO:teuthology.packaging:Writing yum repo: [ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-10T06:59:04.170 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T06:59:04.170 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-10T06:59:04.208 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-10T06:59:04.208 DEBUG:teuthology.orchestra.run.vm01:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-10T06:59:04.211 DEBUG:teuthology.orchestra.run.vm08:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T06:59:04.284 DEBUG:teuthology.orchestra.run.vm01:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-10T06:59:04.301 DEBUG:teuthology.orchestra.run.vm08:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:59:04.335
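(The logged `sed -i` one-liner does two things to `/etc/yum.repos.d/ceph.repo`: it inserts `priority=1` after every `enabled=1` line so the ceph repo wins over distro repos, and it rewrites any `ref/<branch>/` path segment in the baseurls to `sha1/<sha1>/` so packages are pinned to the exact build. A small Python sketch of the same two transformations — the hostname in the usage example is hypothetical; only the rewrite rules come from the logged command:)

```python
import re

def pin_ceph_repo(repo_text: str, sha1: str) -> str:
    """Mimic the logged sed: add priority=1 and pin baseurls to a sha1 build."""
    # 1) Insert "priority=1" between "enabled=1" and the following "gpg..." line,
    #    matching sed's s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g.
    repo_text = repo_text.replace("enabled=1\ngpg", "enabled=1\npriority=1\ngpg")
    # 2) Rewrite branch-ref path segments to the exact sha1,
    #    matching sed's s;ref/[a-zA-Z0-9_-]*/;sha1/<sha1>/;g.
    return re.sub(r"ref/[a-zA-Z0-9_-]*/", f"sha1/{sha1}/", repo_text)
```

Applied to a repo stanza whose baseurl contains `.../ref/squid/...`, the result carries `priority=1` and a `.../sha1/e911bdeb.../...` baseurl instead.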
INFO:teuthology.orchestra.run.vm08.stdout:check_obsoletes = 1
2026-03-10T06:59:04.336 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean all
2026-03-10T06:59:04.380 DEBUG:teuthology.orchestra.run.vm01:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-10T06:59:04.419 INFO:teuthology.orchestra.run.vm01.stdout:check_obsoletes = 1
2026-03-10T06:59:04.420 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean all
2026-03-10T06:59:04.564 INFO:teuthology.orchestra.run.vm08.stdout:41 files removed
2026-03-10T06:59:04.600 DEBUG:teuthology.orchestra.run.vm08:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T06:59:04.642 INFO:teuthology.orchestra.run.vm01.stdout:41 files removed
2026-03-10T06:59:04.674 DEBUG:teuthology.orchestra.run.vm01:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-10T06:59:06.020 INFO:teuthology.orchestra.run.vm08.stdout:ceph packages for x86_64 70 kB/s | 84 kB 00:01
2026-03-10T06:59:06.082 INFO:teuthology.orchestra.run.vm01.stdout:ceph packages for x86_64 71 kB/s | 84 kB 00:01
2026-03-10T06:59:06.986 INFO:teuthology.orchestra.run.vm08.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-10T06:59:07.065 INFO:teuthology.orchestra.run.vm01.stdout:ceph noarch packages 12 kB/s | 12 kB 00:00
2026-03-10T06:59:07.960 INFO:teuthology.orchestra.run.vm08.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-10T06:59:08.052 INFO:teuthology.orchestra.run.vm01.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-10T06:59:09.226 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - BaseOS 7.7 MB/s | 8.9 MB 00:01
2026-03-10T06:59:09.604 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - BaseOS 5.5 MB/s | 8.9 MB 00:01
2026-03-10T06:59:11.496 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - AppStream 22 MB/s | 27 MB 00:01
2026-03-10T06:59:12.476 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - AppStream 11 MB/s | 27 MB 00:02
2026-03-10T06:59:15.665 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - CRB 6.1 MB/s | 8.0 MB 00:01
2026-03-10T06:59:16.735 INFO:teuthology.orchestra.run.vm08.stdout:CentOS Stream 9 - Extras packages 99 kB/s | 20 kB 00:00
2026-03-10T06:59:16.933 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - CRB 5.8 MB/s | 8.0 MB 00:01
2026-03-10T06:59:18.457 INFO:teuthology.orchestra.run.vm08.stdout:Extra Packages for Enterprise Linux 12 MB/s | 20 MB 00:01
2026-03-10T06:59:18.647 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - Extras packages 24 kB/s | 20 kB 00:00
2026-03-10T06:59:19.803 INFO:teuthology.orchestra.run.vm01.stdout:Extra Packages for Enterprise Linux 19 MB/s | 20 MB 00:01
2026-03-10T06:59:23.080 INFO:teuthology.orchestra.run.vm08.stdout:lab-extras 65 kB/s | 50 kB 00:00
2026-03-10T06:59:24.407 INFO:teuthology.orchestra.run.vm01.stdout:lab-extras 63 kB/s | 50 kB 00:00
2026-03-10T06:59:24.483 INFO:teuthology.orchestra.run.vm08.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T06:59:24.484 INFO:teuthology.orchestra.run.vm08.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-10T06:59:24.488 INFO:teuthology.orchestra.run.vm08.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-10T06:59:24.488 INFO:teuthology.orchestra.run.vm08.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-10T06:59:24.518 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout:======================================================================================
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout:======================================================================================
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout:Installing:
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout:Upgrading:
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout:Installing dependencies:
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-10T06:59:24.524 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-10T06:59:24.525
INFO:teuthology.orchestra.run.vm08.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-10T06:59:24.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections noarch
3.0.0-8.el9 epel 23 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-10T06:59:24.526 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout:Installing weak dependencies:
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout:======================================================================================
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout:Install 134 Packages
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout:Upgrade 2 Packages
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:24.527 INFO:teuthology.orchestra.run.vm08.stdout:Total download size: 210 M
2026-03-10T06:59:24.527
INFO:teuthology.orchestra.run.vm08.stdout:Downloading Packages: 2026-03-10T06:59:25.782 INFO:teuthology.orchestra.run.vm08.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 14 kB/s | 6.5 kB 00:00 2026-03-10T06:59:25.793 INFO:teuthology.orchestra.run.vm01.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T06:59:25.793 INFO:teuthology.orchestra.run.vm01.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-10T06:59:25.799 INFO:teuthology.orchestra.run.vm01.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed. 2026-03-10T06:59:25.800 INFO:teuthology.orchestra.run.vm01.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed. 2026-03-10T06:59:25.828 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout:====================================================================================== 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout:====================================================================================== 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout:Installing: 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm 
noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k 2026-03-10T06:59:25.832 INFO:teuthology.orchestra.run.vm01.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k 2026-03-10T06:59:25.833 
INFO:teuthology.orchestra.run.vm01.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout:Upgrading: 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout:Installing dependencies: 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 
ceph-noarch 17 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-10T06:59:25.833 
INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M 2026-03-10T06:59:25.833 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 
862 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools x86_64 
1.46.7-10.el9 epel 144 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions noarch 
4.15.0-1.el9 epel 86 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-10T06:59:25.834 INFO:teuthology.orchestra.run.vm01.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout:Installing weak dependencies: 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout:====================================================================================== 
2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout:Install 134 Packages 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout:Upgrade 2 Packages 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout:Total download size: 210 M 2026-03-10T06:59:25.835 INFO:teuthology.orchestra.run.vm01.stdout:Downloading Packages: 2026-03-10T06:59:26.581 INFO:teuthology.orchestra.run.vm08.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00 2026-03-10T06:59:26.697 INFO:teuthology.orchestra.run.vm08.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00 2026-03-10T06:59:26.888 INFO:teuthology.orchestra.run.vm08.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.5 MB/s | 5.5 MB 00:01 2026-03-10T06:59:27.031 INFO:teuthology.orchestra.run.vm08.stdout:(5/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 7.5 MB/s | 1.1 MB 00:00 2026-03-10T06:59:27.045 INFO:teuthology.orchestra.run.vm08.stdout:(6/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 7.0 MB/s | 2.4 MB 00:00 2026-03-10T06:59:27.489 INFO:teuthology.orchestra.run.vm08.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 10 MB/s | 4.7 MB 00:00 2026-03-10T06:59:27.821 INFO:teuthology.orchestra.run.vm01.stdout:(1/136): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 15 kB/s | 6.5 kB 00:00 2026-03-10T06:59:28.108 INFO:teuthology.orchestra.run.vm08.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 7.8 MB/s | 22 MB 00:02 2026-03-10T06:59:28.229 INFO:teuthology.orchestra.run.vm08.stdout:(9/136): ceph-selinux-19.2.3-678.ge911bdeb.el9. 
207 kB/s | 25 kB 00:00 2026-03-10T06:59:28.294 INFO:teuthology.orchestra.run.vm08.stdout:(10/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9 13 MB/s | 11 MB 00:00 2026-03-10T06:59:28.389 INFO:teuthology.orchestra.run.vm08.stdout:(11/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 13 MB/s | 17 MB 00:01 2026-03-10T06:59:28.405 INFO:teuthology.orchestra.run.vm08.stdout:(12/136): libcephfs-devel-19.2.3-678.ge911bdeb. 302 kB/s | 34 kB 00:00 2026-03-10T06:59:28.516 INFO:teuthology.orchestra.run.vm08.stdout:(13/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 7.7 MB/s | 1.0 MB 00:00 2026-03-10T06:59:28.518 INFO:teuthology.orchestra.run.vm08.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.4 MB/s | 163 kB 00:00 2026-03-10T06:59:28.621 INFO:teuthology.orchestra.run.vm01.stdout:(2/136): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00 2026-03-10T06:59:28.635 INFO:teuthology.orchestra.run.vm08.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-10T06:59:28.636 INFO:teuthology.orchestra.run.vm08.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 4.2 MB/s | 503 kB 00:00 2026-03-10T06:59:28.734 INFO:teuthology.orchestra.run.vm01.stdout:(3/136): ceph-immutable-object-cache-19.2.3-678 1.3 MB/s | 145 kB 00:00 2026-03-10T06:59:28.769 INFO:teuthology.orchestra.run.vm08.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 341 kB/s | 45 kB 00:00 2026-03-10T06:59:28.903 INFO:teuthology.orchestra.run.vm08.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.0 MB/s | 142 kB 00:00 2026-03-10T06:59:29.034 INFO:teuthology.orchestra.run.vm08.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.2 MB/s | 165 kB 00:00 2026-03-10T06:59:29.057 INFO:teuthology.orchestra.run.vm08.stdout:(20/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 13 MB/s | 5.4 MB 00:00 2026-03-10T06:59:29.057 INFO:teuthology.orchestra.run.vm01.stdout:(4/136): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.3 MB/s | 5.5 MB 00:01 2026-03-10T06:59:29.090 
INFO:teuthology.orchestra.run.vm01.stdout:(5/136): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 6.8 MB/s | 2.4 MB 00:00 2026-03-10T06:59:29.147 INFO:teuthology.orchestra.run.vm08.stdout:(21/136): python3-rados-19.2.3-678.ge911bdeb.el 2.8 MB/s | 323 kB 00:00 2026-03-10T06:59:29.176 INFO:teuthology.orchestra.run.vm08.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.5 MB/s | 303 kB 00:00 2026-03-10T06:59:29.196 INFO:teuthology.orchestra.run.vm01.stdout:(6/136): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 7.7 MB/s | 1.1 MB 00:00 2026-03-10T06:59:29.260 INFO:teuthology.orchestra.run.vm08.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 885 kB/s | 100 kB 00:00 2026-03-10T06:59:29.292 INFO:teuthology.orchestra.run.vm08.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 738 kB/s | 85 kB 00:00 2026-03-10T06:59:29.440 INFO:teuthology.orchestra.run.vm08.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.1 MB/s | 171 kB 00:00 2026-03-10T06:59:29.564 INFO:teuthology.orchestra.run.vm08.stdout:(26/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 10 MB/s | 3.1 MB 00:00 2026-03-10T06:59:29.571 INFO:teuthology.orchestra.run.vm08.stdout:(27/136): ceph-grafana-dashboards-19.2.3-678.ge 239 kB/s | 31 kB 00:00 2026-03-10T06:59:29.649 INFO:teuthology.orchestra.run.vm01.stdout:(7/136): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 8.5 MB/s | 4.7 MB 00:00 2026-03-10T06:59:29.676 INFO:teuthology.orchestra.run.vm08.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.3 MB/s | 150 kB 00:00 2026-03-10T06:59:30.050 INFO:teuthology.orchestra.run.vm08.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 7.9 MB/s | 3.8 MB 00:00 2026-03-10T06:59:30.173 INFO:teuthology.orchestra.run.vm08.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.0 MB/s | 253 kB 00:00 2026-03-10T06:59:30.289 INFO:teuthology.orchestra.run.vm08.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 425 kB/s | 49 kB 00:00 2026-03-10T06:59:30.404 INFO:teuthology.orchestra.run.vm08.stdout:(32/136): 
ceph-prometheus-alerts-19.2.3-678.ge9 146 kB/s | 17 kB 00:00 2026-03-10T06:59:30.524 INFO:teuthology.orchestra.run.vm08.stdout:(33/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 299 kB 00:00 2026-03-10T06:59:30.595 INFO:teuthology.orchestra.run.vm08.stdout:(34/136): ceph-mgr-diskprediction-local-19.2.3- 8.0 MB/s | 7.4 MB 00:00 2026-03-10T06:59:30.650 INFO:teuthology.orchestra.run.vm08.stdout:(35/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 6.0 MB/s | 769 kB 00:00 2026-03-10T06:59:30.814 INFO:teuthology.orchestra.run.vm01.stdout:(8/136): ceph-common-19.2.3-678.ge911bdeb.el9.x 6.3 MB/s | 22 MB 00:03 2026-03-10T06:59:30.844 INFO:teuthology.orchestra.run.vm01.stdout:(9/136): ceph-radosgw-19.2.3-678.ge911bdeb.el9. 9.0 MB/s | 11 MB 00:01 2026-03-10T06:59:30.844 INFO:teuthology.orchestra.run.vm08.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 208 kB/s | 40 kB 00:00 2026-03-10T06:59:30.880 INFO:teuthology.orchestra.run.vm08.stdout:(37/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.2 MB/s | 351 kB 00:00 2026-03-10T06:59:30.890 INFO:teuthology.orchestra.run.vm08.stdout:(38/136): libconfig-1.7.2-9.el9.x86_64.rpm 1.5 MB/s | 72 kB 00:00 2026-03-10T06:59:30.931 INFO:teuthology.orchestra.run.vm01.stdout:(10/136): ceph-selinux-19.2.3-678.ge911bdeb.el9 214 kB/s | 25 kB 00:00 2026-03-10T06:59:30.969 INFO:teuthology.orchestra.run.vm08.stdout:(39/136): libgfortran-11.5.0-14.el9.x86_64.rpm 8.7 MB/s | 794 kB 00:00 2026-03-10T06:59:30.976 INFO:teuthology.orchestra.run.vm08.stdout:(40/136): libquadmath-11.5.0-14.el9.x86_64.rpm 2.1 MB/s | 184 kB 00:00 2026-03-10T06:59:31.003 INFO:teuthology.orchestra.run.vm08.stdout:(41/136): mailcap-2.1.49-5.el9.noarch.rpm 988 kB/s | 33 kB 00:00 2026-03-10T06:59:31.014 INFO:teuthology.orchestra.run.vm08.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 2.4 MB/s | 93 kB 00:00 2026-03-10T06:59:31.047 INFO:teuthology.orchestra.run.vm08.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 5.7 MB/s | 253 kB 00:00 2026-03-10T06:59:31.049 
INFO:teuthology.orchestra.run.vm01.stdout:(11/136): libcephfs-devel-19.2.3-678.ge911bdeb. 285 kB/s | 34 kB 00:00
2026-03-10T06:59:31.093 INFO:teuthology.orchestra.run.vm08.stdout:(44/136): python3-cryptography-36.0.1-5.el9.x86 16 MB/s | 1.2 MB 00:00
2026-03-10T06:59:31.094 INFO:teuthology.orchestra.run.vm08.stdout:(45/136): python3-ply-3.11-14.el9.noarch.rpm 2.2 MB/s | 106 kB 00:00
2026-03-10T06:59:31.133 INFO:teuthology.orchestra.run.vm08.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 3.3 MB/s | 135 kB 00:00
2026-03-10T06:59:31.134 INFO:teuthology.orchestra.run.vm08.stdout:(47/136): python3-requests-2.25.1-10.el9.noarch 3.1 MB/s | 126 kB 00:00
2026-03-10T06:59:31.176 INFO:teuthology.orchestra.run.vm08.stdout:(48/136): unzip-6.0-59.el9.x86_64.rpm 4.3 MB/s | 182 kB 00:00
2026-03-10T06:59:31.181 INFO:teuthology.orchestra.run.vm08.stdout:(49/136): python3-urllib3-1.26.5-7.el9.noarch.r 4.5 MB/s | 218 kB 00:00
2026-03-10T06:59:31.221 INFO:teuthology.orchestra.run.vm08.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 5.9 MB/s | 266 kB 00:00
2026-03-10T06:59:31.285 INFO:teuthology.orchestra.run.vm01.stdout:(12/136): libcephfs2-19.2.3-678.ge911bdeb.el9.x 4.1 MB/s | 1.0 MB 00:00
2026-03-10T06:59:31.348 INFO:teuthology.orchestra.run.vm01.stdout:(13/136): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 7.9 MB/s | 17 MB 00:02
2026-03-10T06:59:31.375 INFO:teuthology.orchestra.run.vm08.stdout:(51/136): boost-program-options-1.75.0-13.el9.x 537 kB/s | 104 kB 00:00
2026-03-10T06:59:31.404 INFO:teuthology.orchestra.run.vm01.stdout:(14/136): libcephsqlite-19.2.3-678.ge911bdeb.el 1.3 MB/s | 163 kB 00:00
2026-03-10T06:59:31.465 INFO:teuthology.orchestra.run.vm01.stdout:(15/136): librados-devel-19.2.3-678.ge911bdeb.e 1.1 MB/s | 127 kB 00:00
2026-03-10T06:59:31.526 INFO:teuthology.orchestra.run.vm01.stdout:(16/136): libradosstriper1-19.2.3-678.ge911bdeb 4.0 MB/s | 503 kB 00:00
2026-03-10T06:59:31.536 INFO:teuthology.orchestra.run.vm08.stdout:(52/136): flexiblas-3.0.4-9.el9.x86_64.rpm 94 kB/s | 30 kB 00:00
2026-03-10T06:59:31.646 INFO:teuthology.orchestra.run.vm01.stdout:(17/136): python3-ceph-argparse-19.2.3-678.ge91 378 kB/s | 45 kB 00:00
2026-03-10T06:59:31.664 INFO:teuthology.orchestra.run.vm08.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 116 kB/s | 15 kB 00:00
2026-03-10T06:59:31.721 INFO:teuthology.orchestra.run.vm08.stdout:(54/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 8.7 MB/s | 3.0 MB 00:00
2026-03-10T06:59:31.755 INFO:teuthology.orchestra.run.vm08.stdout:(55/136): libnbd-1.20.3-4.el9.x86_64.rpm 1.7 MB/s | 164 kB 00:00
2026-03-10T06:59:31.769 INFO:teuthology.orchestra.run.vm01.stdout:(18/136): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00
2026-03-10T06:59:31.889 INFO:teuthology.orchestra.run.vm01.stdout:(19/136): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00
2026-03-10T06:59:32.037 INFO:teuthology.orchestra.run.vm01.stdout:(20/136): python3-rados-19.2.3-678.ge911bdeb.el 2.1 MB/s | 323 kB 00:00
2026-03-10T06:59:32.038 INFO:teuthology.orchestra.run.vm08.stdout:(56/136): ceph-test-19.2.3-678.ge911bdeb.el9.x8 13 MB/s | 50 MB 00:03
2026-03-10T06:59:32.039 INFO:teuthology.orchestra.run.vm08.stdout:(57/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 503 kB/s | 160 kB 00:00
2026-03-10T06:59:32.049 INFO:teuthology.orchestra.run.vm08.stdout:(58/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 154 kB/s | 45 kB 00:00
2026-03-10T06:59:32.169 INFO:teuthology.orchestra.run.vm01.stdout:(21/136): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 7.7 MB/s | 5.4 MB 00:00
2026-03-10T06:59:32.171 INFO:teuthology.orchestra.run.vm01.stdout:(22/136): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.2 MB/s | 303 kB 00:00
2026-03-10T06:59:32.217 INFO:teuthology.orchestra.run.vm08.stdout:(59/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 1.4 MB/s | 246 kB 00:00
2026-03-10T06:59:32.284 INFO:teuthology.orchestra.run.vm01.stdout:(23/136): python3-rgw-19.2.3-678.ge911bdeb.el9. 869 kB/s | 100 kB 00:00
2026-03-10T06:59:32.289 INFO:teuthology.orchestra.run.vm01.stdout:(24/136): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 723 kB/s | 85 kB 00:00
2026-03-10T06:59:32.319 INFO:teuthology.orchestra.run.vm08.stdout:(60/136): librdkafka-1.6.1-102.el9.x86_64.rpm 2.3 MB/s | 662 kB 00:00
2026-03-10T06:59:32.333 INFO:teuthology.orchestra.run.vm08.stdout:(61/136): libxslt-1.1.34-12.el9.x86_64.rpm 823 kB/s | 233 kB 00:00
2026-03-10T06:59:32.408 INFO:teuthology.orchestra.run.vm08.stdout:(62/136): openblas-0.3.29-1.el9.x86_64.rpm 564 kB/s | 42 kB 00:00
2026-03-10T06:59:32.416 INFO:teuthology.orchestra.run.vm01.stdout:(25/136): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.3 MB/s | 171 kB 00:00
2026-03-10T06:59:32.478 INFO:teuthology.orchestra.run.vm08.stdout:(63/136): lua-5.4.4-4.el9.x86_64.rpm 1.2 MB/s | 188 kB 00:00
2026-03-10T06:59:32.514 INFO:teuthology.orchestra.run.vm08.stdout:(64/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 983 kB/s | 292 kB 00:00
2026-03-10T06:59:32.534 INFO:teuthology.orchestra.run.vm01.stdout:(26/136): ceph-grafana-dashboards-19.2.3-678.ge 265 kB/s | 31 kB 00:00
2026-03-10T06:59:32.641 INFO:teuthology.orchestra.run.vm01.stdout:(27/136): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 8.7 MB/s | 3.1 MB 00:00
2026-03-10T06:59:32.652 INFO:teuthology.orchestra.run.vm01.stdout:(28/136): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 1.2 MB/s | 150 kB 00:00
2026-03-10T06:59:32.939 INFO:teuthology.orchestra.run.vm08.stdout:(65/136): python3-babel-2.9.1-2.el9.noarch.rpm 14 MB/s | 6.0 MB 00:00
2026-03-10T06:59:33.117 INFO:teuthology.orchestra.run.vm01.stdout:(29/136): ceph-mgr-dashboard-19.2.3-678.ge911bd 8.0 MB/s | 3.8 MB 00:00
2026-03-10T06:59:33.177 INFO:teuthology.orchestra.run.vm08.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 1.0 MB/s | 244 kB 00:00
2026-03-10T06:59:33.234 INFO:teuthology.orchestra.run.vm01.stdout:(30/136): ceph-mgr-modules-core-19.2.3-678.ge91 2.1 MB/s | 253 kB 00:00
2026-03-10T06:59:33.349 INFO:teuthology.orchestra.run.vm01.stdout:(31/136): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 430 kB/s | 49 kB 00:00
2026-03-10T06:59:33.463 INFO:teuthology.orchestra.run.vm01.stdout:(32/136): ceph-prometheus-alerts-19.2.3-678.ge9 147 kB/s | 17 kB 00:00
2026-03-10T06:59:33.506 INFO:teuthology.orchestra.run.vm08.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 755 kB/s | 249 kB 00:00
2026-03-10T06:59:33.581 INFO:teuthology.orchestra.run.vm01.stdout:(33/136): ceph-volume-19.2.3-678.ge911bdeb.el9. 2.5 MB/s | 299 kB 00:00
2026-03-10T06:59:33.613 INFO:teuthology.orchestra.run.vm01.stdout:(34/136): ceph-mgr-diskprediction-local-19.2.3- 7.7 MB/s | 7.4 MB 00:00
2026-03-10T06:59:33.643 INFO:teuthology.orchestra.run.vm08.stdout:(68/136): python3-jmespath-1.0.1-1.el9.noarch.r 349 kB/s | 48 kB 00:00
2026-03-10T06:59:33.668 INFO:teuthology.orchestra.run.vm01.stdout:(35/136): cryptsetup-2.8.1-3.el9.x86_64.rpm 6.3 MB/s | 351 kB 00:00
2026-03-10T06:59:33.682 INFO:teuthology.orchestra.run.vm01.stdout:(36/136): ledmon-libs-1.1.0-3.el9.x86_64.rpm 2.9 MB/s | 40 kB 00:00
2026-03-10T06:59:33.692 INFO:teuthology.orchestra.run.vm01.stdout:(37/136): libconfig-1.7.2-9.el9.x86_64.rpm 7.4 MB/s | 72 kB 00:00
2026-03-10T06:59:33.704 INFO:teuthology.orchestra.run.vm01.stdout:(38/136): cephadm-19.2.3-678.ge911bdeb.el9.noar 6.1 MB/s | 769 kB 00:00
2026-03-10T06:59:33.720 INFO:teuthology.orchestra.run.vm08.stdout:(69/136): protobuf-3.14.0-17.el9.x86_64.rpm 828 kB/s | 1.0 MB 00:01
2026-03-10T06:59:33.736 INFO:teuthology.orchestra.run.vm01.stdout:(39/136): libquadmath-11.5.0-14.el9.x86_64.rpm 5.6 MB/s | 184 kB 00:00
2026-03-10T06:59:33.742 INFO:teuthology.orchestra.run.vm01.stdout:(40/136): mailcap-2.1.49-5.el9.noarch.rpm 5.5 MB/s | 33 kB 00:00
2026-03-10T06:59:33.744 INFO:teuthology.orchestra.run.vm01.stdout:(41/136): libgfortran-11.5.0-14.el9.x86_64.rpm 15 MB/s | 794 kB 00:00
2026-03-10T06:59:33.754 INFO:teuthology.orchestra.run.vm01.stdout:(42/136): pciutils-3.7.0-7.el9.x86_64.rpm 7.7 MB/s | 93 kB 00:00
2026-03-10T06:59:33.778 INFO:teuthology.orchestra.run.vm01.stdout:(43/136): python3-cffi-1.14.5-5.el9.x86_64.rpm 7.5 MB/s | 253 kB 00:00
2026-03-10T06:59:33.813 INFO:teuthology.orchestra.run.vm01.stdout:(44/136): python3-cryptography-36.0.1-5.el9.x86 21 MB/s | 1.2 MB 00:00
2026-03-10T06:59:33.815 INFO:teuthology.orchestra.run.vm01.stdout:(45/136): python3-ply-3.11-14.el9.noarch.rpm 2.8 MB/s | 106 kB 00:00
2026-03-10T06:59:33.824 INFO:teuthology.orchestra.run.vm08.stdout:(70/136): python3-mako-1.1.4-6.el9.noarch.rpm 1.6 MB/s | 172 kB 00:00
2026-03-10T06:59:33.835 INFO:teuthology.orchestra.run.vm01.stdout:(46/136): python3-pycparser-2.20-6.el9.noarch.r 6.0 MB/s | 135 kB 00:00
2026-03-10T06:59:33.842 INFO:teuthology.orchestra.run.vm01.stdout:(47/136): python3-urllib3-1.26.5-7.el9.noarch.r 30 MB/s | 218 kB 00:00
2026-03-10T06:59:33.871 INFO:teuthology.orchestra.run.vm01.stdout:(48/136): unzip-6.0-59.el9.x86_64.rpm 6.3 MB/s | 182 kB 00:00
2026-03-10T06:59:33.874 INFO:teuthology.orchestra.run.vm01.stdout:(49/136): python3-requests-2.25.1-10.el9.noarch 2.1 MB/s | 126 kB 00:00
2026-03-10T06:59:33.877 INFO:teuthology.orchestra.run.vm01.stdout:(50/136): zip-3.0-35.el9.x86_64.rpm 41 MB/s | 266 kB 00:00
2026-03-10T06:59:33.962 INFO:teuthology.orchestra.run.vm01.stdout:(51/136): flexiblas-3.0.4-9.el9.x86_64.rpm 350 kB/s | 30 kB 00:00
2026-03-10T06:59:33.994 INFO:teuthology.orchestra.run.vm08.stdout:(71/136): python3-markupsafe-1.1.1-12.el9.x86_6 205 kB/s | 35 kB 00:00
2026-03-10T06:59:34.050 INFO:teuthology.orchestra.run.vm08.stdout:(72/136): openblas-openmp-0.3.29-1.el9.x86_64.r 3.2 MB/s | 5.3 MB 00:01
2026-03-10T06:59:34.055 INFO:teuthology.orchestra.run.vm01.stdout:(52/136): boost-program-options-1.75.0-13.el9.x 579 kB/s | 104 kB 00:00
2026-03-10T06:59:34.097 INFO:teuthology.orchestra.run.vm01.stdout:(53/136): flexiblas-openblas-openmp-3.0.4-9.el9 355 kB/s | 15 kB 00:00
2026-03-10T06:59:34.154 INFO:teuthology.orchestra.run.vm08.stdout:(73/136): python3-libstoragemgmt-1.10.1-1.el9.x 346 kB/s | 177 kB 00:00
2026-03-10T06:59:34.154 INFO:teuthology.orchestra.run.vm01.stdout:(54/136): libnbd-1.20.3-4.el9.x86_64.rpm 2.8 MB/s | 164 kB 00:00
2026-03-10T06:59:34.185 INFO:teuthology.orchestra.run.vm01.stdout:(55/136): libpmemobj-1.12.1-1.el9.x86_64.rpm 5.2 MB/s | 160 kB 00:00
2026-03-10T06:59:34.214 INFO:teuthology.orchestra.run.vm01.stdout:(56/136): librabbitmq-0.11.0-7.el9.x86_64.rpm 1.5 MB/s | 45 kB 00:00
2026-03-10T06:59:34.276 INFO:teuthology.orchestra.run.vm01.stdout:(57/136): librdkafka-1.6.1-102.el9.x86_64.rpm 11 MB/s | 662 kB 00:00
2026-03-10T06:59:34.308 INFO:teuthology.orchestra.run.vm01.stdout:(58/136): libstoragemgmt-1.10.1-1.el9.x86_64.rp 7.6 MB/s | 246 kB 00:00
2026-03-10T06:59:34.340 INFO:teuthology.orchestra.run.vm01.stdout:(59/136): libxslt-1.1.34-12.el9.x86_64.rpm 7.2 MB/s | 233 kB 00:00
2026-03-10T06:59:34.371 INFO:teuthology.orchestra.run.vm01.stdout:(60/136): lttng-ust-2.12.0-6.el9.x86_64.rpm 9.0 MB/s | 292 kB 00:00
2026-03-10T06:59:34.402 INFO:teuthology.orchestra.run.vm01.stdout:(61/136): lua-5.4.4-4.el9.x86_64.rpm 6.0 MB/s | 188 kB 00:00
2026-03-10T06:59:34.431 INFO:teuthology.orchestra.run.vm01.stdout:(62/136): openblas-0.3.29-1.el9.x86_64.rpm 1.4 MB/s | 42 kB 00:00
2026-03-10T06:59:34.540 INFO:teuthology.orchestra.run.vm08.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 200 kB/s | 77 kB 00:00
2026-03-10T06:59:34.569 INFO:teuthology.orchestra.run.vm01.stdout:(63/136): flexiblas-netlib-3.0.4-9.el9.x86_64.r 4.9 MB/s | 3.0 MB 00:00
2026-03-10T06:59:34.755 INFO:teuthology.orchestra.run.vm08.stdout:(75/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 8.1 MB/s | 6.1 MB 00:00
2026-03-10T06:59:34.812 INFO:teuthology.orchestra.run.vm01.stdout:(64/136): protobuf-3.14.0-17.el9.x86_64.rpm 4.1 MB/s | 1.0 MB 00:00
2026-03-10T06:59:34.868 INFO:teuthology.orchestra.run.vm08.stdout:(76/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 540 kB/s | 442 kB 00:00
2026-03-10T06:59:34.870 INFO:teuthology.orchestra.run.vm08.stdout:(77/136): python3-protobuf-3.14.0-17.el9.noarch 812 kB/s | 267 kB 00:00
2026-03-10T06:59:34.890 INFO:teuthology.orchestra.run.vm01.stdout:(65/136): openblas-openmp-0.3.29-1.el9.x86_64.r 12 MB/s | 5.3 MB 00:00
2026-03-10T06:59:34.924 INFO:teuthology.orchestra.run.vm01.stdout:(66/136): python3-devel-3.9.25-3.el9.x86_64.rpm 7.1 MB/s | 244 kB 00:00
2026-03-10T06:59:34.956 INFO:teuthology.orchestra.run.vm01.stdout:(67/136): python3-jinja2-2.11.3-8.el9.noarch.rp 7.7 MB/s | 249 kB 00:00
2026-03-10T06:59:34.965 INFO:teuthology.orchestra.run.vm08.stdout:(78/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 751 kB/s | 157 kB 00:00
2026-03-10T06:59:34.986 INFO:teuthology.orchestra.run.vm01.stdout:(68/136): python3-jmespath-1.0.1-1.el9.noarch.r 1.5 MB/s | 48 kB 00:00
2026-03-10T06:59:35.007 INFO:teuthology.orchestra.run.vm01.stdout:(69/136): python3-babel-2.9.1-2.el9.noarch.rpm 31 MB/s | 6.0 MB 00:00
2026-03-10T06:59:35.013 INFO:teuthology.orchestra.run.vm08.stdout:(79/136): python3-requests-oauthlib-1.3.0-12.el 374 kB/s | 54 kB 00:00
2026-03-10T06:59:35.017 INFO:teuthology.orchestra.run.vm01.stdout:(70/136): python3-libstoragemgmt-1.10.1-1.el9.x 5.6 MB/s | 177 kB 00:00
2026-03-10T06:59:35.038 INFO:teuthology.orchestra.run.vm01.stdout:(71/136): python3-mako-1.1.4-6.el9.noarch.rpm 5.5 MB/s | 172 kB 00:00
2026-03-10T06:59:35.047 INFO:teuthology.orchestra.run.vm01.stdout:(72/136): python3-markupsafe-1.1.1-12.el9.x86_6 1.2 MB/s | 35 kB 00:00
2026-03-10T06:59:35.082 INFO:teuthology.orchestra.run.vm01.stdout:(73/136): python3-numpy-f2py-1.23.5-2.el9.x86_6 12 MB/s | 442 kB 00:00
2026-03-10T06:59:35.112 INFO:teuthology.orchestra.run.vm01.stdout:(74/136): python3-packaging-20.9-5.el9.noarch.r 2.5 MB/s | 77 kB 00:00
2026-03-10T06:59:35.144 INFO:teuthology.orchestra.run.vm01.stdout:(75/136): python3-protobuf-3.14.0-17.el9.noarch 8.2 MB/s | 267 kB 00:00
2026-03-10T06:59:35.147 INFO:teuthology.orchestra.run.vm08.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 312 kB/s | 42 kB 00:00
2026-03-10T06:59:35.175 INFO:teuthology.orchestra.run.vm01.stdout:(76/136): python3-pyasn1-0.4.8-7.el9.noarch.rpm 5.1 MB/s | 157 kB 00:00
2026-03-10T06:59:35.201 INFO:teuthology.orchestra.run.vm08.stdout:(81/136): python3-pyasn1-modules-0.4.8-7.el9.no 835 kB/s | 277 kB 00:00
2026-03-10T06:59:35.207 INFO:teuthology.orchestra.run.vm01.stdout:(77/136): python3-pyasn1-modules-0.4.8-7.el9.no 8.5 MB/s | 277 kB 00:00
2026-03-10T06:59:35.237 INFO:teuthology.orchestra.run.vm01.stdout:(78/136): python3-requests-oauthlib-1.3.0-12.el 1.8 MB/s | 54 kB 00:00
2026-03-10T06:59:35.256 INFO:teuthology.orchestra.run.vm01.stdout:(79/136): python3-numpy-1.23.5-2.el9.x86_64.rpm 28 MB/s | 6.1 MB 00:00
2026-03-10T06:59:35.263 INFO:teuthology.orchestra.run.vm08.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 592 kB/s | 37 kB 00:00
2026-03-10T06:59:35.285 INFO:teuthology.orchestra.run.vm01.stdout:(80/136): python3-toml-0.10.2-6.el9.noarch.rpm 1.4 MB/s | 42 kB 00:00
2026-03-10T06:59:35.317 INFO:teuthology.orchestra.run.vm01.stdout:(81/136): qatlib-25.08.0-2.el9.x86_64.rpm 7.5 MB/s | 240 kB 00:00
2026-03-10T06:59:35.346 INFO:teuthology.orchestra.run.vm01.stdout:(82/136): qatlib-service-25.08.0-2.el9.x86_64.r 1.3 MB/s | 37 kB 00:00
2026-03-10T06:59:35.375 INFO:teuthology.orchestra.run.vm01.stdout:(83/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 2.2 MB/s | 66 kB 00:00
2026-03-10T06:59:35.408 INFO:teuthology.orchestra.run.vm01.stdout:(84/136): socat-1.7.4.1-8.el9.x86_64.rpm 9.0 MB/s | 303 kB 00:00
2026-03-10T06:59:35.437 INFO:teuthology.orchestra.run.vm01.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 2.1 MB/s | 64 kB 00:00
2026-03-10T06:59:35.455 INFO:teuthology.orchestra.run.vm08.stdout:(83/136): qatlib-25.08.0-2.el9.x86_64.rpm 779 kB/s | 240 kB 00:00
2026-03-10T06:59:35.494 INFO:teuthology.orchestra.run.vm08.stdout:(84/136): qatzip-libs-1.3.1-1.el9.x86_64.rpm 288 kB/s | 66 kB 00:00
2026-03-10T06:59:35.638 INFO:teuthology.orchestra.run.vm01.stdout:(86/136): lua-devel-5.4.4-4.el9.x86_64.rpm 111 kB/s | 22 kB 00:00
2026-03-10T06:59:35.654 INFO:teuthology.orchestra.run.vm08.stdout:(85/136): xmlstarlet-1.6.1-20.el9.x86_64.rpm 398 kB/s | 64 kB 00:00
2026-03-10T06:59:35.685 INFO:teuthology.orchestra.run.vm08.stdout:(86/136): socat-1.7.4.1-8.el9.x86_64.rpm 1.3 MB/s | 303 kB 00:00
2026-03-10T06:59:35.892 INFO:teuthology.orchestra.run.vm08.stdout:(87/136): lua-devel-5.4.4-4.el9.x86_64.rpm 94 kB/s | 22 kB 00:00
2026-03-10T06:59:35.951 INFO:teuthology.orchestra.run.vm01.stdout:(87/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 27 MB/s | 19 MB 00:00
2026-03-10T06:59:35.993 INFO:teuthology.orchestra.run.vm01.stdout:(88/136): protobuf-compiler-3.14.0-17.el9.x86_6 2.4 MB/s | 862 kB 00:00
2026-03-10T06:59:36.093 INFO:teuthology.orchestra.run.vm08.stdout:(88/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 2.7 MB/s | 551 kB 00:00
2026-03-10T06:59:36.125 INFO:teuthology.orchestra.run.vm08.stdout:(89/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 9.4 MB/s | 308 kB 00:00
2026-03-10T06:59:36.148 INFO:teuthology.orchestra.run.vm08.stdout:(90/136): grpc-data-1.46.7-10.el9.noarch.rpm 838 kB/s | 19 kB 00:00
2026-03-10T06:59:36.152 INFO:teuthology.orchestra.run.vm08.stdout:(91/136): protobuf-compiler-3.14.0-17.el9.x86_6 1.8 MB/s | 862 kB 00:00
2026-03-10T06:59:36.227 INFO:teuthology.orchestra.run.vm08.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 336 kB/s | 25 kB 00:00
2026-03-10T06:59:36.290 INFO:teuthology.orchestra.run.vm08.stdout:(93/136): libarrow-9.0.0-15.el9.x86_64.rpm 31 MB/s | 4.4 MB 00:00
2026-03-10T06:59:36.290 INFO:teuthology.orchestra.run.vm08.stdout:(94/136): liboath-2.6.12-1.el9.x86_64.rpm 769 kB/s | 49 kB 00:00
2026-03-10T06:59:36.315 INFO:teuthology.orchestra.run.vm08.stdout:(95/136): libunwind-1.6.2-1.el9.x86_64.rpm 2.6 MB/s | 67 kB 00:00
2026-03-10T06:59:36.352 INFO:teuthology.orchestra.run.vm08.stdout:(96/136): luarocks-3.9.2-5.el9.noarch.rpm 2.4 MB/s | 151 kB 00:00
2026-03-10T06:59:36.372 INFO:teuthology.orchestra.run.vm01.stdout:(89/136): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.3 MB/s | 551 kB 00:00
2026-03-10T06:59:36.374 INFO:teuthology.orchestra.run.vm01.stdout:(90/136): gperftools-libs-2.9.1-3.el9.x86_64.rp 807 kB/s | 308 kB 00:00
2026-03-10T06:59:36.413 INFO:teuthology.orchestra.run.vm08.stdout:(97/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 8.4 MB/s | 838 kB 00:00
2026-03-10T06:59:36.419 INFO:teuthology.orchestra.run.vm01.stdout:(91/136): grpc-data-1.46.7-10.el9.noarch.rpm 417 kB/s | 19 kB 00:00
2026-03-10T06:59:36.420 INFO:teuthology.orchestra.run.vm08.stdout:(98/136): python3-asyncssh-2.13.2-5.el9.noarch. 7.8 MB/s | 548 kB 00:00
2026-03-10T06:59:36.437 INFO:teuthology.orchestra.run.vm08.stdout:(99/136): python3-autocommand-2.2.2-8.el9.noarc 1.2 MB/s | 29 kB 00:00
2026-03-10T06:59:36.445 INFO:teuthology.orchestra.run.vm08.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 2.5 MB/s | 60 kB 00:00
2026-03-10T06:59:36.467 INFO:teuthology.orchestra.run.vm01.stdout:(92/136): libarrow-doc-9.0.0-15.el9.noarch.rpm 521 kB/s | 25 kB 00:00
2026-03-10T06:59:36.467 INFO:teuthology.orchestra.run.vm08.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 1.4 MB/s | 43 kB 00:00
2026-03-10T06:59:36.469 INFO:teuthology.orchestra.run.vm08.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 1.3 MB/s | 32 kB 00:00
2026-03-10T06:59:36.492 INFO:teuthology.orchestra.run.vm08.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 587 kB/s | 14 kB 00:00
2026-03-10T06:59:36.509 INFO:teuthology.orchestra.run.vm08.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 4.3 MB/s | 173 kB 00:00
2026-03-10T06:59:36.519 INFO:teuthology.orchestra.run.vm01.stdout:(93/136): liboath-2.6.12-1.el9.x86_64.rpm 947 kB/s | 49 kB 00:00
2026-03-10T06:59:36.521 INFO:teuthology.orchestra.run.vm08.stdout:(105/136): python3-cherrypy-18.6.1-2.el9.noarch 12 MB/s | 358 kB 00:00
2026-03-10T06:59:36.539 INFO:teuthology.orchestra.run.vm08.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 8.2 MB/s | 254 kB 00:00
2026-03-10T06:59:36.568 INFO:teuthology.orchestra.run.vm08.stdout:(107/136): python3-grpcio-tools-1.46.7-10.el9.x 5.0 MB/s | 144 kB 00:00
2026-03-10T06:59:36.569 INFO:teuthology.orchestra.run.vm01.stdout:(94/136): libunwind-1.6.2-1.el9.x86_64.rpm 1.3 MB/s | 67 kB 00:00
2026-03-10T06:59:36.591 INFO:teuthology.orchestra.run.vm08.stdout:(108/136): python3-jaraco-8.2.1-3.el9.noarch.rp 463 kB/s | 11 kB 00:00
2026-03-10T06:59:36.615 INFO:teuthology.orchestra.run.vm08.stdout:(109/136): python3-jaraco-classes-3.2.1-5.el9.n 761 kB/s | 18 kB 00:00
2026-03-10T06:59:36.630 INFO:teuthology.orchestra.run.vm01.stdout:(95/136): luarocks-3.9.2-5.el9.noarch.rpm 2.5 MB/s | 151 kB 00:00
2026-03-10T06:59:36.644 INFO:teuthology.orchestra.run.vm08.stdout:(110/136): python3-grpcio-1.46.7-10.el9.x86_64. 17 MB/s | 2.0 MB 00:00
2026-03-10T06:59:36.646 INFO:teuthology.orchestra.run.vm08.stdout:(111/136): python3-jaraco-collections-3.0.0-8.e 734 kB/s | 23 kB 00:00
2026-03-10T06:59:36.664 INFO:teuthology.orchestra.run.vm01.stdout:(96/136): libarrow-9.0.0-15.el9.x86_64.rpm 15 MB/s | 4.4 MB 00:00
2026-03-10T06:59:36.671 INFO:teuthology.orchestra.run.vm08.stdout:(112/136): python3-jaraco-functools-3.5.0-2.el9 791 kB/s | 19 kB 00:00
2026-03-10T06:59:36.673 INFO:teuthology.orchestra.run.vm08.stdout:(113/136): python3-jaraco-context-6.0.1-3.el9.n 680 kB/s | 20 kB 00:00
2026-03-10T06:59:36.695 INFO:teuthology.orchestra.run.vm08.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 1.1 MB/s | 26 kB 00:00
2026-03-10T06:59:36.712 INFO:teuthology.orchestra.run.vm08.stdout:(115/136): python3-kubernetes-26.1.0-3.el9.noar 26 MB/s | 1.0 MB 00:00
2026-03-10T06:59:36.724 INFO:teuthology.orchestra.run.vm01.stdout:(97/136): python3-asyncssh-2.13.2-5.el9.noarch. 9.0 MB/s | 548 kB 00:00
2026-03-10T06:59:36.765 INFO:teuthology.orchestra.run.vm01.stdout:(98/136): parquet-libs-9.0.0-15.el9.x86_64.rpm 6.1 MB/s | 838 kB 00:00
2026-03-10T06:59:36.765 INFO:teuthology.orchestra.run.vm08.stdout:(116/136): python3-scipy-1.9.3-2.el9.x86_64.rpm 11 MB/s | 19 MB 00:01
2026-03-10T06:59:36.767 INFO:teuthology.orchestra.run.vm08.stdout:(117/136): python3-logutils-0.3.5-21.el9.noarch 639 kB/s | 46 kB 00:00
2026-03-10T06:59:36.768 INFO:teuthology.orchestra.run.vm08.stdout:(118/136): python3-more-itertools-8.12.0-2.el9. 1.4 MB/s | 79 kB 00:00
2026-03-10T06:59:36.771 INFO:teuthology.orchestra.run.vm01.stdout:(99/136): python3-autocommand-2.2.2-8.el9.noarc 622 kB/s | 29 kB 00:00
2026-03-10T06:59:36.792 INFO:teuthology.orchestra.run.vm08.stdout:(119/136): python3-portend-3.1.0-2.el9.noarch.r 696 kB/s | 16 kB 00:00
2026-03-10T06:59:36.798 INFO:teuthology.orchestra.run.vm08.stdout:(120/136): python3-pecan-1.4.2-3.el9.noarch.rpm 8.9 MB/s | 272 kB 00:00
2026-03-10T06:59:36.813 INFO:teuthology.orchestra.run.vm01.stdout:(100/136): python3-backports-tarfile-1.2.0-1.el 1.2 MB/s | 60 kB 00:00
2026-03-10T06:59:36.816 INFO:teuthology.orchestra.run.vm08.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 3.7 MB/s | 90 kB 00:00
2026-03-10T06:59:36.818 INFO:teuthology.orchestra.run.vm01.stdout:(101/136): python3-bcrypt-3.2.2-1.el9.x86_64.rp 944 kB/s | 43 kB 00:00
2026-03-10T06:59:36.822 INFO:teuthology.orchestra.run.vm08.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 1.3 MB/s | 31 kB 00:00
2026-03-10T06:59:36.842 INFO:teuthology.orchestra.run.vm08.stdout:(123/136): python3-routes-2.5.1-5.el9.noarch.rp 7.4 MB/s | 188 kB 00:00
2026-03-10T06:59:36.847 INFO:teuthology.orchestra.run.vm08.stdout:(124/136): python3-rsa-4.9-2.el9.noarch.rpm 2.4 MB/s | 59 kB 00:00
2026-03-10T06:59:36.858 INFO:teuthology.orchestra.run.vm08.stdout:(125/136): python3-natsort-7.1.1-5.el9.noarch.r 621 kB/s | 58 kB 00:00
2026-03-10T06:59:36.859 INFO:teuthology.orchestra.run.vm01.stdout:(102/136): python3-cachetools-4.2.4-1.el9.noarc 690 kB/s | 32 kB 00:00
2026-03-10T06:59:36.864 INFO:teuthology.orchestra.run.vm01.stdout:(103/136): python3-certifi-2023.05.07-4.el9.noa 302 kB/s | 14 kB 00:00
2026-03-10T06:59:36.865 INFO:teuthology.orchestra.run.vm08.stdout:(126/136): python3-tempora-5.0.0-2.el9.noarch.r 1.5 MB/s | 36 kB 00:00
2026-03-10T06:59:36.872 INFO:teuthology.orchestra.run.vm08.stdout:(127/136): python3-typing-extensions-4.15.0-1.e 3.4 MB/s | 86 kB 00:00
2026-03-10T06:59:36.895 INFO:teuthology.orchestra.run.vm08.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 3.0 MB/s | 90 kB 00:00
2026-03-10T06:59:36.911 INFO:teuthology.orchestra.run.vm01.stdout:(104/136): python3-cheroot-10.0.1-4.el9.noarch. 3.3 MB/s | 173 kB 00:00
2026-03-10T06:59:36.915 INFO:teuthology.orchestra.run.vm01.stdout:(105/136): python3-cherrypy-18.6.1-2.el9.noarch 6.9 MB/s | 358 kB 00:00
2026-03-10T06:59:36.916 INFO:teuthology.orchestra.run.vm08.stdout:(129/136): python3-werkzeug-2.0.3-3.el9.1.noarc 9.6 MB/s | 427 kB 00:00
2026-03-10T06:59:36.918 INFO:teuthology.orchestra.run.vm08.stdout:(130/136): python3-xmltodict-0.12.0-15.el9.noar 973 kB/s | 22 kB 00:00
2026-03-10T06:59:36.933 INFO:teuthology.orchestra.run.vm08.stdout:(131/136): python3-webob-1.8.8-2.el9.noarch.rpm 3.0 MB/s | 230 kB 00:00
2026-03-10T06:59:36.939 INFO:teuthology.orchestra.run.vm08.stdout:(132/136): python3-zc-lockfile-2.0-10.el9.noarc 852 kB/s | 20 kB 00:00
2026-03-10T06:59:36.945 INFO:teuthology.orchestra.run.vm08.stdout:(133/136): re2-20211101-20.el9.x86_64.rpm 7.0 MB/s | 191 kB 00:00
2026-03-10T06:59:36.965 INFO:teuthology.orchestra.run.vm01.stdout:(106/136): python3-google-auth-2.45.0-1.el9.noa 4.6 MB/s | 254 kB 00:00
2026-03-10T06:59:36.989 INFO:teuthology.orchestra.run.vm01.stdout:(107/136): python3-grpcio-1.46.7-10.el9.x86_64. 28 MB/s | 2.0 MB 00:00
2026-03-10T06:59:37.017 INFO:teuthology.orchestra.run.vm01.stdout:(108/136): python3-grpcio-tools-1.46.7-10.el9.x 2.7 MB/s | 144 kB 00:00
2026-03-10T06:59:37.030 INFO:teuthology.orchestra.run.vm08.stdout:(134/136): thrift-0.15.0-4.el9.x86_64.rpm 16 MB/s | 1.6 MB 00:00
2026-03-10T06:59:37.038 INFO:teuthology.orchestra.run.vm01.stdout:(109/136): python3-jaraco-8.2.1-3.el9.noarch.rp 219 kB/s | 11 kB 00:00
2026-03-10T06:59:37.064 INFO:teuthology.orchestra.run.vm01.stdout:(110/136): python3-jaraco-classes-3.2.1-5.el9.n 381 kB/s | 18 kB 00:00
2026-03-10T06:59:37.088 INFO:teuthology.orchestra.run.vm01.stdout:(111/136): python3-jaraco-collections-3.0.0-8.e 470 kB/s | 23 kB 00:00
2026-03-10T06:59:37.112 INFO:teuthology.orchestra.run.vm01.stdout:(112/136): python3-jaraco-context-6.0.1-3.el9.n 411 kB/s | 20 kB 00:00
2026-03-10T06:59:37.135 INFO:teuthology.orchestra.run.vm01.stdout:(113/136): python3-jaraco-functools-3.5.0-2.el9 415 kB/s | 19 kB 00:00
2026-03-10T06:59:37.160 INFO:teuthology.orchestra.run.vm01.stdout:(114/136): python3-jaraco-text-4.0.0-2.el9.noar 555 kB/s | 26 kB 00:00
2026-03-10T06:59:37.199 INFO:teuthology.orchestra.run.vm01.stdout:(115/136): python3-kubernetes-26.1.0-3.el9.noar 16 MB/s | 1.0 MB 00:00
2026-03-10T06:59:37.207 INFO:teuthology.orchestra.run.vm01.stdout:(116/136): python3-logutils-0.3.5-21.el9.noarch 990 kB/s | 46 kB 00:00
2026-03-10T06:59:37.248 INFO:teuthology.orchestra.run.vm01.stdout:(117/136): python3-more-itertools-8.12.0-2.el9. 1.6 MB/s | 79 kB 00:00
2026-03-10T06:59:37.254 INFO:teuthology.orchestra.run.vm01.stdout:(118/136): python3-natsort-7.1.1-5.el9.noarch.r 1.2 MB/s | 58 kB 00:00
2026-03-10T06:59:37.298 INFO:teuthology.orchestra.run.vm01.stdout:(119/136): python3-pecan-1.4.2-3.el9.noarch.rpm 5.3 MB/s | 272 kB 00:00
2026-03-10T06:59:37.301 INFO:teuthology.orchestra.run.vm01.stdout:(120/136): python3-portend-3.1.0-2.el9.noarch.r 351 kB/s | 16 kB 00:00
2026-03-10T06:59:37.347 INFO:teuthology.orchestra.run.vm01.stdout:(121/136): python3-pyOpenSSL-21.0.0-1.el9.noarc 1.8 MB/s | 90 kB 00:00
2026-03-10T06:59:37.347 INFO:teuthology.orchestra.run.vm01.stdout:(122/136): python3-repoze-lru-0.7-16.el9.noarch 664 kB/s | 31 kB 00:00
2026-03-10T06:59:37.396 INFO:teuthology.orchestra.run.vm01.stdout:(123/136): python3-routes-2.5.1-5.el9.noarch.rp 3.8 MB/s | 188 kB 00:00
2026-03-10T06:59:37.396 INFO:teuthology.orchestra.run.vm01.stdout:(124/136): python3-rsa-4.9-2.el9.noarch.rpm 1.2 MB/s | 59 kB 00:00
2026-03-10T06:59:37.442 INFO:teuthology.orchestra.run.vm01.stdout:(125/136): python3-tempora-5.0.0-2.el9.noarch.r 770 kB/s | 36 kB 00:00
2026-03-10T06:59:37.444 INFO:teuthology.orchestra.run.vm01.stdout:(126/136): python3-typing-extensions-4.15.0-1.e 1.8 MB/s | 86 kB 00:00
2026-03-10T06:59:37.493 INFO:teuthology.orchestra.run.vm01.stdout:(127/136): python3-webob-1.8.8-2.el9.noarch.rpm 4.4 MB/s | 230 kB 00:00
2026-03-10T06:59:37.496 INFO:teuthology.orchestra.run.vm01.stdout:(128/136): python3-websocket-client-1.2.3-2.el9 1.7 MB/s | 90 kB 00:00
2026-03-10T06:59:37.545 INFO:teuthology.orchestra.run.vm01.stdout:(129/136): python3-xmltodict-0.12.0-15.el9.noar 453 kB/s | 22 kB 00:00
2026-03-10T06:59:37.547 INFO:teuthology.orchestra.run.vm01.stdout:(130/136): python3-werkzeug-2.0.3-3.el9.1.noarc 7.8 MB/s | 427 kB 00:00
2026-03-10T06:59:37.592 INFO:teuthology.orchestra.run.vm01.stdout:(131/136): python3-zc-lockfile-2.0-10.el9.noarc 429 kB/s | 20 kB 00:00
2026-03-10T06:59:37.595 INFO:teuthology.orchestra.run.vm01.stdout:(132/136): re2-20211101-20.el9.x86_64.rpm 3.9 MB/s | 191 kB 00:00
2026-03-10T06:59:37.685 INFO:teuthology.orchestra.run.vm01.stdout:(133/136): thrift-0.15.0-4.el9.x86_64.rpm 17 MB/s | 1.6 MB 00:00
2026-03-10T06:59:37.972 INFO:teuthology.orchestra.run.vm08.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 3.3 MB/s | 3.4 MB 00:01
2026-03-10T06:59:38.008 INFO:teuthology.orchestra.run.vm08.stdout:(136/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.0 MB/s | 3.2 MB 00:01
2026-03-10T06:59:38.012 INFO:teuthology.orchestra.run.vm08.stdout:--------------------------------------------------------------------------------
2026-03-10T06:59:38.012 INFO:teuthology.orchestra.run.vm08.stdout:Total 16 MB/s | 210 MB 00:13
2026-03-10T06:59:38.628 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T06:59:38.678 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T06:59:38.678 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T06:59:38.745 INFO:teuthology.orchestra.run.vm01.stdout:(134/136): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.0 MB/s | 3.2 MB 00:01
2026-03-10T06:59:38.755 INFO:teuthology.orchestra.run.vm01.stdout:(135/136): librados2-19.2.3-678.ge911bdeb.el9.x 3.0 MB/s | 3.4 MB 00:01
2026-03-10T06:59:39.285 INFO:teuthology.orchestra.run.vm01.stdout:(136/136): ceph-test-19.2.3-678.ge911bdeb.el9.x 5.9 MB/s | 50 MB 00:08
2026-03-10T06:59:39.289 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T06:59:39.289 INFO:teuthology.orchestra.run.vm01.stdout:Total 16 MB/s | 210 MB 00:13
2026-03-10T06:59:39.526 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T06:59:39.526 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T06:59:39.855 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T06:59:39.907 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T06:59:39.907 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T06:59:40.476 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T06:59:40.503 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T06:59:40.516 INFO:teuthology.orchestra.run.vm08.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T06:59:40.697 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T06:59:40.699 INFO:teuthology.orchestra.run.vm08.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T06:59:40.760 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T06:59:40.760 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T06:59:40.763 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T06:59:40.771 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T06:59:40.802 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T06:59:40.813 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T06:59:40.817 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T06:59:40.820 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T06:59:40.826 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T06:59:40.837 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T06:59:40.837 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T06:59:40.875 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T06:59:40.877 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T06:59:40.891 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T06:59:40.926 INFO:teuthology.orchestra.run.vm08.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138
2026-03-10T06:59:40.968 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138
2026-03-10T06:59:40.974 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138
2026-03-10T06:59:41.002 INFO:teuthology.orchestra.run.vm08.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138
2026-03-10T06:59:41.017 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138
2026-03-10T06:59:41.026 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138
2026-03-10T06:59:41.037 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138
2026-03-10T06:59:41.045 INFO:teuthology.orchestra.run.vm08.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138
2026-03-10T06:59:41.049 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138
2026-03-10T06:59:41.056 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138
2026-03-10T06:59:41.086 INFO:teuthology.orchestra.run.vm08.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138
2026-03-10T06:59:41.104 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138
2026-03-10T06:59:41.109 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138
2026-03-10T06:59:41.117 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138
2026-03-10T06:59:41.119 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138
2026-03-10T06:59:41.154 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138
2026-03-10T06:59:41.162 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138
2026-03-10T06:59:41.174 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138
2026-03-10T06:59:41.190 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138
2026-03-10T06:59:41.201 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138
2026-03-10T06:59:41.232 INFO:teuthology.orchestra.run.vm08.stdout: Installing : zip-3.0-35.el9.x86_64 33/138
2026-03-10T06:59:41.238 INFO:teuthology.orchestra.run.vm08.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138
2026-03-10T06:59:41.248 INFO:teuthology.orchestra.run.vm08.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138
2026-03-10T06:59:41.279 INFO:teuthology.orchestra.run.vm08.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138
2026-03-10T06:59:41.341 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138
2026-03-10T06:59:41.359 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138
2026-03-10T06:59:41.367 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138
2026-03-10T06:59:41.377 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138
2026-03-10T06:59:41.384 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138
2026-03-10T06:59:41.389 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138
2026-03-10T06:59:41.406 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138
2026-03-10T06:59:41.436 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138
2026-03-10T06:59:41.443 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138
2026-03-10T06:59:41.450 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138
2026-03-10T06:59:41.465 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138
2026-03-10T06:59:41.479 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138
2026-03-10T06:59:41.491 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138
2026-03-10T06:59:41.558 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138
2026-03-10T06:59:41.566 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138
2026-03-10T06:59:41.577 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138
2026-03-10T06:59:41.625 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138
2026-03-10T06:59:41.710 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T06:59:41.725 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/138
2026-03-10T06:59:41.739 INFO:teuthology.orchestra.run.vm01.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/138
2026-03-10T06:59:41.916 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/138
2026-03-10T06:59:41.918 INFO:teuthology.orchestra.run.vm01.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T06:59:41.979 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T06:59:41.982 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T06:59:42.002 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138
2026-03-10T06:59:42.011 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/138
2026-03-10T06:59:42.019 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138
2026-03-10T06:59:42.021 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T06:59:42.026 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/138
2026-03-10T06:59:42.027 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138
2026-03-10T06:59:42.028 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/138
2026-03-10T06:59:42.033 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/138
2026-03-10T06:59:42.036 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138
2026-03-10T06:59:42.041 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138
2026-03-10T06:59:42.044 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/138
2026-03-10T06:59:42.045 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T06:59:42.050 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138
2026-03-10T06:59:42.055 INFO:teuthology.orchestra.run.vm08.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138
2026-03-10T06:59:42.058 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138
2026-03-10T06:59:42.084 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T06:59:42.085 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T06:59:42.090 INFO:teuthology.orchestra.run.vm08.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138
2026-03-10T06:59:42.100 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/138
2026-03-10T06:59:42.136 INFO:teuthology.orchestra.run.vm01.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/138 2026-03-10T06:59:42.144 INFO:teuthology.orchestra.run.vm08.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138 2026-03-10T06:59:42.158 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138 2026-03-10T06:59:42.167 INFO:teuthology.orchestra.run.vm08.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138 2026-03-10T06:59:42.173 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138 2026-03-10T06:59:42.177 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/138 2026-03-10T06:59:42.182 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138 2026-03-10T06:59:42.183 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/138 2026-03-10T06:59:42.188 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138 2026-03-10T06:59:42.198 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138 2026-03-10T06:59:42.205 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138 2026-03-10T06:59:42.210 INFO:teuthology.orchestra.run.vm01.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/138 2026-03-10T06:59:42.225 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/138 2026-03-10T06:59:42.234 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-packaging-20.9-5.el9.noarch 18/138 2026-03-10T06:59:42.240 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138 2026-03-10T06:59:42.245 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 19/138 2026-03-10T06:59:42.254 
INFO:teuthology.orchestra.run.vm01.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 20/138 2026-03-10T06:59:42.255 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138 2026-03-10T06:59:42.259 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lua-5.4.4-4.el9.x86_64 21/138 2026-03-10T06:59:42.266 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 22/138 2026-03-10T06:59:42.298 INFO:teuthology.orchestra.run.vm01.stdout: Installing : unzip-6.0-59.el9.x86_64 23/138 2026-03-10T06:59:42.302 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138 2026-03-10T06:59:42.316 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 24/138 2026-03-10T06:59:42.324 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 25/138 2026-03-10T06:59:42.333 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 26/138 2026-03-10T06:59:42.336 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 27/138 2026-03-10T06:59:42.375 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 28/138 2026-03-10T06:59:42.382 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 29/138 2026-03-10T06:59:42.395 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 30/138 2026-03-10T06:59:42.412 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 31/138 2026-03-10T06:59:42.421 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/138 2026-03-10T06:59:42.452 INFO:teuthology.orchestra.run.vm01.stdout: Installing : zip-3.0-35.el9.x86_64 33/138 2026-03-10T06:59:42.460 
INFO:teuthology.orchestra.run.vm01.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/138 2026-03-10T06:59:42.470 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/138 2026-03-10T06:59:42.508 INFO:teuthology.orchestra.run.vm01.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/138 2026-03-10T06:59:42.576 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 37/138 2026-03-10T06:59:42.593 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138 2026-03-10T06:59:42.595 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 38/138 2026-03-10T06:59:42.609 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rsa-4.9-2.el9.noarch 39/138 2026-03-10T06:59:42.620 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/138 2026-03-10T06:59:42.627 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-10T06:59:42.628 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 41/138 2026-03-10T06:59:42.635 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-10T06:59:42.636 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/138 2026-03-10T06:59:42.656 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/138 2026-03-10T06:59:42.686 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/138 2026-03-10T06:59:42.694 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 45/138 2026-03-10T06:59:42.702 INFO:teuthology.orchestra.run.vm08.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-10T06:59:42.703 
INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 46/138 2026-03-10T06:59:42.706 INFO:teuthology.orchestra.run.vm08.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-10T06:59:42.719 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 47/138 2026-03-10T06:59:42.733 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-10T06:59:42.734 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 48/138 2026-03-10T06:59:42.747 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 49/138 2026-03-10T06:59:42.816 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 50/138 2026-03-10T06:59:42.825 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 51/138 2026-03-10T06:59:42.836 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 52/138 2026-03-10T06:59:42.887 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 53/138 2026-03-10T06:59:43.144 INFO:teuthology.orchestra.run.vm08.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-10T06:59:43.246 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-10T06:59:43.305 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 54/138 2026-03-10T06:59:43.325 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 55/138 2026-03-10T06:59:43.331 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 56/138 2026-03-10T06:59:43.340 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 57/138 
2026-03-10T06:59:43.351 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 58/138 2026-03-10T06:59:43.360 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 59/138 2026-03-10T06:59:43.366 INFO:teuthology.orchestra.run.vm01.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 60/138 2026-03-10T06:59:43.368 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 61/138 2026-03-10T06:59:43.403 INFO:teuthology.orchestra.run.vm01.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 62/138 2026-03-10T06:59:43.462 INFO:teuthology.orchestra.run.vm01.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 63/138 2026-03-10T06:59:43.478 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 64/138 2026-03-10T06:59:43.487 INFO:teuthology.orchestra.run.vm01.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 65/138 2026-03-10T06:59:43.494 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 66/138 2026-03-10T06:59:43.503 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 67/138 2026-03-10T06:59:43.509 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 68/138 2026-03-10T06:59:43.520 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 69/138 2026-03-10T06:59:43.526 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 70/138 2026-03-10T06:59:43.564 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 71/138 2026-03-10T06:59:43.578 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 72/138 2026-03-10T06:59:43.628 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 73/138 2026-03-10T06:59:43.934 
INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 74/138 2026-03-10T06:59:43.970 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 75/138 2026-03-10T06:59:43.980 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 76/138 2026-03-10T06:59:44.049 INFO:teuthology.orchestra.run.vm01.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/138 2026-03-10T06:59:44.053 INFO:teuthology.orchestra.run.vm01.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/138 2026-03-10T06:59:44.059 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-10T06:59:44.081 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/138 2026-03-10T06:59:44.090 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-10T06:59:44.097 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-10T06:59:44.102 INFO:teuthology.orchestra.run.vm08.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-10T06:59:44.262 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-10T06:59:44.266 INFO:teuthology.orchestra.run.vm08.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T06:59:44.300 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T06:59:44.304 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-10T06:59:44.313 INFO:teuthology.orchestra.run.vm08.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-10T06:59:44.507 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/138 2026-03-10T06:59:44.573 
INFO:teuthology.orchestra.run.vm08.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-10T06:59:44.576 INFO:teuthology.orchestra.run.vm08.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T06:59:44.595 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T06:59:44.597 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-10T06:59:44.615 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/138 2026-03-10T06:59:45.491 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/138 2026-03-10T06:59:45.528 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/138 2026-03-10T06:59:45.536 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/138 2026-03-10T06:59:45.541 INFO:teuthology.orchestra.run.vm01.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/138 2026-03-10T06:59:45.716 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 86/138 2026-03-10T06:59:45.726 INFO:teuthology.orchestra.run.vm01.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T06:59:45.736 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T06:59:45.742 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T06:59:45.763 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 87/138 2026-03-10T06:59:45.765 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T06:59:45.767 INFO:teuthology.orchestra.run.vm01.stdout: Installing : 
python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 88/138 2026-03-10T06:59:45.775 INFO:teuthology.orchestra.run.vm01.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 89/138 2026-03-10T06:59:45.783 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-10T06:59:45.803 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-10T06:59:45.892 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-10T06:59:45.907 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-10T06:59:45.935 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-10T06:59:45.973 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-10T06:59:46.037 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-10T06:59:46.048 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-10T06:59:46.055 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T06:59:46.060 INFO:teuthology.orchestra.run.vm01.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 90/138 2026-03-10T06:59:46.063 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T06:59:46.063 INFO:teuthology.orchestra.run.vm08.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-10T06:59:46.067 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-10T06:59:46.070 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T06:59:46.083 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: 
librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 91/138 2026-03-10T06:59:46.085 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 92/138 2026-03-10T06:59:46.086 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T06:59:46.399 INFO:teuthology.orchestra.run.vm08.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-10T06:59:46.459 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T06:59:46.503 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T06:59:46.503 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-10T06:59:46.503 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 
2026-03-10T06:59:46.503 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:46.508 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T06:59:47.309 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T06:59:47.421 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T06:59:47.445 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 93/138 2026-03-10T06:59:47.483 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ply-3.11-14.el9.noarch 94/138 2026-03-10T06:59:47.512 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 95/138 2026-03-10T06:59:47.640 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 96/138 2026-03-10T06:59:47.655 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 97/138 2026-03-10T06:59:47.691 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 98/138 2026-03-10T06:59:47.822 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 99/138 2026-03-10T06:59:47.968 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 100/138 2026-03-10T06:59:48.113 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 101/138 2026-03-10T06:59:48.119 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T06:59:48.126 INFO:teuthology.orchestra.run.vm01.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 103/138 2026-03-10T06:59:48.133 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 104/138 2026-03-10T06:59:48.135 
INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T06:59:48.158 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 105/138 2026-03-10T06:59:48.504 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 106/138 2026-03-10T06:59:48.511 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T06:59:48.554 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 107/138 2026-03-10T06:59:48.554 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-10T06:59:48.554 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-10T06:59:48.554 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:59:48.564 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /sys 2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /proc 2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /mnt 2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /var/tmp 2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /home 2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /root 2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /tmp 
2026-03-10T06:59:53.265 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:53.397 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T06:59:53.421 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T06:59:53.421 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:59:53.421 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T06:59:53.421 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T06:59:53.421 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T06:59:53.421 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:53.667 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T06:59:53.690 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T06:59:53.690 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:59:53.690 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T06:59:53.690 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T06:59:53.690 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 
2026-03-10T06:59:53.690 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:53.699 INFO:teuthology.orchestra.run.vm08.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138 2026-03-10T06:59:53.702 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138 2026-03-10T06:59:53.722 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T06:59:53.722 INFO:teuthology.orchestra.run.vm08.stdout:Creating group 'qat' with GID 994. 2026-03-10T06:59:53.722 INFO:teuthology.orchestra.run.vm08.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-10T06:59:53.722 INFO:teuthology.orchestra.run.vm08.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-10T06:59:53.722 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:53.735 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T06:59:53.765 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138 2026-03-10T06:59:53.765 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 
2026-03-10T06:59:53.765 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:53.808 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138 2026-03-10T06:59:53.902 INFO:teuthology.orchestra.run.vm08.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138 2026-03-10T06:59:53.908 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T06:59:53.926 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138 2026-03-10T06:59:53.926 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:59:53.926 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T06:59:53.926 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:54.829 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T06:59:54.858 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138 2026-03-10T06:59:54.858 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:59:54.858 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T06:59:54.858 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-10T06:59:54.858 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 
2026-03-10T06:59:54.858 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T06:59:54.921 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T06:59:54.925 INFO:teuthology.orchestra.run.vm08.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138 2026-03-10T06:59:54.932 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138 2026-03-10T06:59:54.956 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138 2026-03-10T06:59:54.959 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T06:59:55.341 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 108/138 2026-03-10T06:59:55.341 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /sys 2026-03-10T06:59:55.342 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /proc 2026-03-10T06:59:55.342 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /mnt 2026-03-10T06:59:55.342 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /var/tmp 2026-03-10T06:59:55.342 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /home 2026-03-10T06:59:55.342 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /root 2026-03-10T06:59:55.342 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /tmp 2026-03-10T06:59:55.342 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:59:55.472 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T06:59:55.497 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 109/138 2026-03-10T06:59:55.497 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not 
supported for this. 2026-03-10T06:59:55.497 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-10T06:59:55.497 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T06:59:55.497 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-10T06:59:55.497 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T06:59:55.541 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138 2026-03-10T06:59:55.548 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138 2026-03-10T06:59:55.732 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T06:59:55.757 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 110/138 2026-03-10T06:59:55.757 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T06:59:55.757 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-10T06:59:55.757 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-10T06:59:55.757 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 
2026-03-10T06:59:55.757 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:55.765 INFO:teuthology.orchestra.run.vm01.stdout: Installing : mailcap-2.1.49-5.el9.noarch 111/138
2026-03-10T06:59:55.768 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 112/138
2026-03-10T06:59:55.786 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:59:55.786 INFO:teuthology.orchestra.run.vm01.stdout:Creating group 'qat' with GID 994.
2026-03-10T06:59:55.786 INFO:teuthology.orchestra.run.vm01.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-10T06:59:55.786 INFO:teuthology.orchestra.run.vm01.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-10T06:59:55.786 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:55.797 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:59:55.827 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 113/138
2026-03-10T06:59:55.827 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-10T06:59:55.827 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:55.872 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 114/138
2026-03-10T06:59:55.962 INFO:teuthology.orchestra.run.vm01.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/138
2026-03-10T06:59:55.980 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T06:59:55.995 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 116/138
2026-03-10T06:59:55.995 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:55.995 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T06:59:55.995 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:56.102 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:59:56.104 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:59:56.171 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:59:56.233 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T06:59:56.236 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:59:56.259 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:59:56.259 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:56.259 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T06:59:56.259 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:59:56.259 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:59:56.259 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:56.279 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:59:56.293 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:59:56.798 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T06:59:56.822 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 117/138
2026-03-10T06:59:56.822 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:56.822 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-10T06:59:56.822 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T06:59:56.822 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-10T06:59:56.822 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:56.832 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T06:59:56.836 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:59:56.860 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:59:56.860 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:56.860 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T06:59:56.860 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:59:56.860 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:59:56.860 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:56.872 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:59:56.885 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T06:59:56.888 INFO:teuthology.orchestra.run.vm01.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 118/138
2026-03-10T06:59:56.895 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 119/138
2026-03-10T06:59:56.896 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:59:56.896 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:56.896 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:59:56.896 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:56.920 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 120/138
2026-03-10T06:59:56.923 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T06:59:57.056 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:59:57.079 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:59:57.079 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:57.079 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T06:59:57.079 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:59:57.079 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:59:57.079 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T06:59:57.476 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 121/138
2026-03-10T06:59:57.483 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:59:58.038 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 122/138
2026-03-10T06:59:58.041 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:59:58.106 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 123/138
2026-03-10T06:59:58.165 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 124/138
2026-03-10T06:59:58.168 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:59:58.190 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 125/138
2026-03-10T06:59:58.190 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:58.190 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-10T06:59:58.190 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:59:58.190 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-10T06:59:58.190 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:58.206 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:59:58.220 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 126/138
2026-03-10T06:59:58.731 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 127/138
2026-03-10T06:59:58.735 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:59:58.758 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 128/138
2026-03-10T06:59:58.758 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:58.758 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T06:59:58.758 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:59:58.758 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-10T06:59:58.758 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:58.769 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:59:58.789 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 129/138
2026-03-10T06:59:58.789 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:58.789 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T06:59:58.789 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:58.955 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:59:58.978 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 130/138
2026-03-10T06:59:58.978 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T06:59:58.978 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T06:59:58.978 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:59:58.978 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-10T06:59:58.978 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T06:59:59.742 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138
2026-03-10T06:59:59.753 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138
2026-03-10T06:59:59.759 INFO:teuthology.orchestra.run.vm08.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138
2026-03-10T06:59:59.815 INFO:teuthology.orchestra.run.vm08.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138
2026-03-10T06:59:59.825 INFO:teuthology.orchestra.run.vm08.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T06:59:59.829 INFO:teuthology.orchestra.run.vm08.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138
2026-03-10T06:59:59.829 INFO:teuthology.orchestra.run.vm08.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T06:59:59.847 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138
2026-03-10T06:59:59.847 INFO:teuthology.orchestra.run.vm08.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T07:00:01.258 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T07:00:01.258 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138
2026-03-10T07:00:01.258 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138
2026-03-10T07:00:01.258 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138
2026-03-10T07:00:01.258 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138
2026-03-10T07:00:01.258 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138
2026-03-10T07:00:01.258 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138
2026-03-10T07:00:01.258 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138
2026-03-10T07:00:01.259 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138
2026-03-10T07:00:01.260 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138
2026-03-10T07:00:01.261 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 69/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 85/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138
2026-03-10T07:00:01.262 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 102/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138
2026-03-10T07:00:01.263 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138
2026-03-10T07:00:01.264 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout:Upgraded:
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout:Installed:
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:00:01.376 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T07:00:01.377
INFO:teuthology.orchestra.run.vm08.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T07:00:01.377 
INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T07:00:01.377 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T07:00:01.378 
INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: zip-3.0-35.el9.x86_64 2026-03-10T07:00:01.378 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:00:01.379 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T07:00:01.475 DEBUG:teuthology.parallel:result is None 2026-03-10T07:00:01.704 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 131/138 2026-03-10T07:00:01.715 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 132/138 2026-03-10T07:00:01.720 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 133/138 2026-03-10T07:00:01.776 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 134/138 2026-03-10T07:00:01.787 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T07:00:01.791 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 136/138 2026-03-10T07:00:01.791 INFO:teuthology.orchestra.run.vm01.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T07:00:01.809 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 137/138 2026-03-10T07:00:01.809 INFO:teuthology.orchestra.run.vm01.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/138 
2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/138 2026-03-10T07:00:03.130 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/138 2026-03-10T07:00:03.131 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/138 2026-03-10T07:00:03.131 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/138 2026-03-10T07:00:03.133 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 48/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 49/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : unzip-6.0-59.el9.x86_64 50/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : zip-3.0-35.el9.x86_64 51/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 52/138 
2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 53/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 54/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 55/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 56/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 57/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 58/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 59/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 60/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 61/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 62/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-5.4.4-4.el9.x86_64 63/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 64/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 65/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 66/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 67/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 68/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
python3-jinja2-2.11.3-8.el9.noarch 69/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 70/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 71/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 72/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 73/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 74/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 75/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 76/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 77/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 78/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 79/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 80/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 81/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 82/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 83/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 84/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 
85/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 86/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 87/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 88/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 89/138 2026-03-10T07:00:03.133 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 90/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 91/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 92/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 93/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 94/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 95/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 96/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 97/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 98/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 99/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 100/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 101/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
python3-bcrypt-3.2.2-1.el9.x86_64 102/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 103/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 104/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 105/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 106/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 107/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 108/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 109/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 110/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 111/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 112/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 113/138 2026-03-10T07:00:03.134 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 114/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 115/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 116/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 117/138 2026-03-10T07:00:03.135 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 118/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 119/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 120/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 121/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 122/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 123/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 124/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 125/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 126/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 127/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 128/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 129/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 130/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 131/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 132/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : re2-1:20211101-20.el9.x86_64 133/138 2026-03-10T07:00:03.135 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 134/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 135/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 136/138 2026-03-10T07:00:03.135 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 137/138 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 138/138 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout:Upgraded: 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout:Installed: 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: 
ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T07:00:03.240 
INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T07:00:03.240 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-10T07:00:03.241 
INFO:teuthology.orchestra.run.vm01.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: lua-5.4.4-4.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T07:00:03.241 
INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T07:00:03.241 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-10T07:00:03.242 
INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: re2-1:20211101-20.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-10T07:00:03.242 INFO:teuthology.orchestra.run.vm01.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-10T07:00:03.243 INFO:teuthology.orchestra.run.vm01.stdout: unzip-6.0-59.el9.x86_64 2026-03-10T07:00:03.243 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-10T07:00:03.243 INFO:teuthology.orchestra.run.vm01.stdout: zip-3.0-35.el9.x86_64 2026-03-10T07:00:03.243 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:00:03.243 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 
2026-03-10T07:00:03.336 DEBUG:teuthology.parallel:result is None
2026-03-10T07:00:03.336 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:00:04.006 DEBUG:teuthology.orchestra.run.vm01:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-10T07:00:04.026 INFO:teuthology.orchestra.run.vm01.stdout:19.2.3-678.ge911bdeb.el9
2026-03-10T07:00:04.027 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-10T07:00:04.027 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-10T07:00:04.028 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:00:04.678 DEBUG:teuthology.orchestra.run.vm08:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-10T07:00:04.704 INFO:teuthology.orchestra.run.vm08.stdout:19.2.3-678.ge911bdeb.el9
2026-03-10T07:00:04.704 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-10T07:00:04.704 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-10T07:00:04.705 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-10T07:00:04.705 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T07:00:04.705 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T07:00:04.735 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T07:00:04.735 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-10T07:00:04.773 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-10T07:00:04.773 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T07:00:04.773 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T07:00:04.804 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T07:00:04.873 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T07:00:04.873 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/daemon-helper
2026-03-10T07:00:04.899 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-10T07:00:04.968 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-10T07:00:04.968 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T07:00:04.968 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T07:00:04.997 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T07:00:05.065 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T07:00:05.065 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-10T07:00:05.091 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-10T07:00:05.155 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-10T07:00:05.155 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T07:00:05.155 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T07:00:05.181 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T07:00:05.248 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T07:00:05.248 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/stdin-killer
2026-03-10T07:00:05.276 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-10T07:00:05.343 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T07:00:05.395 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 3}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': True}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'use-ca-signed-key': True}
2026-03-10T07:00:05.395 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:00:05.395 INFO:tasks.cephadm:Cluster fsid is c470e80c-1c4e-11f1-89aa-7f5873752d90
2026-03-10T07:00:05.395 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T07:00:05.395 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.101', 'mon.b': '192.168.123.108'}
2026-03-10T07:00:05.395 INFO:tasks.cephadm:First mon is mon.a on vm01
2026-03-10T07:00:05.395 INFO:tasks.cephadm:First mgr is a
2026-03-10T07:00:05.395 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T07:00:05.395 DEBUG:teuthology.orchestra.run.vm01:> sudo hostname $(hostname -s)
2026-03-10T07:00:05.423 DEBUG:teuthology.orchestra.run.vm08:> sudo hostname $(hostname -s)
2026-03-10T07:00:05.447 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra
2026-03-10T07:00:05.447 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:00:06.064 INFO:tasks.cephadm:builder_project result: [{'url': 'https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'chacra_url': 'https://3.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'centos', 'distro_version': '9', 'distro_codename': None, 'modified': '2026-02-25 18:55:15.146628', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['source', 'x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678.ge911bdeb', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.26+soko16', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T07:00:06.631 INFO:tasks.util.chacra:got chacra host 3.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=centos%2F9%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:00:06.632 INFO:tasks.cephadm:Discovered cachra url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T07:00:06.632 INFO:tasks.cephadm:Downloading cephadm from url: https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm
2026-03-10T07:00:06.632 DEBUG:teuthology.orchestra.run.vm01:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T07:00:08.053 INFO:teuthology.orchestra.run.vm01.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 07:00 /home/ubuntu/cephtest/cephadm
2026-03-10T07:00:08.054 DEBUG:teuthology.orchestra.run.vm08:> curl --silent -L https://3.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T07:00:09.468 INFO:teuthology.orchestra.run.vm08.stdout:-rw-r--r--. 1 ubuntu ubuntu 788355 Mar 10 07:00 /home/ubuntu/cephtest/cephadm
2026-03-10T07:00:09.469 DEBUG:teuthology.orchestra.run.vm01:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T07:00:09.484 DEBUG:teuthology.orchestra.run.vm08:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T07:00:09.506 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts...
2026-03-10T07:00:09.506 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T07:00:09.526 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull
2026-03-10T07:00:09.687 INFO:teuthology.orchestra.run.vm01.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T07:00:09.715 INFO:teuthology.orchestra.run.vm08.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T07:00:58.894 INFO:teuthology.orchestra.run.vm08.stdout:{
2026-03-10T07:00:58.894 INFO:teuthology.orchestra.run.vm08.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T07:00:58.894 INFO:teuthology.orchestra.run.vm08.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T07:00:58.894 INFO:teuthology.orchestra.run.vm08.stdout: "repo_digests": [
2026-03-10T07:00:58.894 INFO:teuthology.orchestra.run.vm08.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T07:00:58.894 INFO:teuthology.orchestra.run.vm08.stdout: ]
2026-03-10T07:00:58.894 INFO:teuthology.orchestra.run.vm08.stdout:}
2026-03-10T07:01:22.943 INFO:teuthology.orchestra.run.vm01.stdout:{
2026-03-10T07:01:22.943 INFO:teuthology.orchestra.run.vm01.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T07:01:22.943 INFO:teuthology.orchestra.run.vm01.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T07:01:22.943 INFO:teuthology.orchestra.run.vm01.stdout: "repo_digests": [
2026-03-10T07:01:22.943 INFO:teuthology.orchestra.run.vm01.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T07:01:22.943 INFO:teuthology.orchestra.run.vm01.stdout: ]
2026-03-10T07:01:22.943 INFO:teuthology.orchestra.run.vm01.stdout:}
2026-03-10T07:01:22.962 DEBUG:teuthology.orchestra.run.vm01:> sudo ssh-keygen -t rsa -f /root/ca-key -N ''
2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:Generating public/private rsa key pair.
2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:Your identification has been saved in /root/ca-key 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:Your public key has been saved in /root/ca-key.pub 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:The key fingerprint is: 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:SHA256:tJaPqpJr+xX1a6u6tsSiZA266rhVaAXoS4B90ls6cA4 root@vm01 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:The key's randomart image is: 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:+---[RSA 3072]----+ 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:|.o.. | 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:|+ E.+ . | 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:|o B.+ o | 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:| o o= o + | 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:|. = ... S . | 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:| + + . o o . | 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:|. +.o + . + | 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:|.=+. +.. . . | 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:|B++=oo=+... 
| 2026-03-10T07:01:23.182 INFO:teuthology.orchestra.run.vm01.stdout:+----[SHA256]-----+ 2026-03-10T07:01:23.184 DEBUG:teuthology.orchestra.run.vm01:> sudo cat /root/ca-key.pub 2026-03-10T07:01:23.207 INFO:teuthology.orchestra.run.vm01.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGAK/eb97sGS/27XFHYGJ3Wz3wRgrchDG0sR7k872CZk3TYzVTrS6DAaWgVuwwAhaViHBm19USx0yPxOCmYnKCn3cB+/bukDaiyDNbRBxLdeqSK/zgY0Geb9clibchlj8nnfrva2LwRKYSjMmM8cg9HNFEB2Ox6nvnTwjGmypmcwDQqSuiCwYE2fd0xvL4o296jQxaKeouzyJn+CHMcpa5o2LAPqPBLqJ+jN8284xjrn//HP6nWgRGrl5TNldYj4BWw0G5WJ3nyHNbewMQXP5a1JHCukid2CEq9ldfMIXruAo8RWXU+8F6viZ2n9aNNdvA3ao+FPwX3ebJdFgS2l050ZbvLNOxTwGp3Qi/rsJEhyJWsUPhB4H+mrt24suVI7CE9rhCn3YaNdQxWnOg4ztbYuyBRfGSjWA8kMHTUdaXFkiNUTqOsQHAzrGagFxyakKAXNSfSgQDiIPvn7xOr/41U4bU/gisvFrhuCbO3JeGtT/qxuKM+I09dL4fD3ioK2s= root@vm01 2026-03-10T07:01:23.208 DEBUG:teuthology.orchestra.run.vm01:> sudo echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGAK/eb97sGS/27XFHYGJ3Wz3wRgrchDG0sR7k872CZk3TYzVTrS6DAaWgVuwwAhaViHBm19USx0yPxOCmYnKCn3cB+/bukDaiyDNbRBxLdeqSK/zgY0Geb9clibchlj8nnfrva2LwRKYSjMmM8cg9HNFEB2Ox6nvnTwjGmypmcwDQqSuiCwYE2fd0xvL4o296jQxaKeouzyJn+CHMcpa5o2LAPqPBLqJ+jN8284xjrn//HP6nWgRGrl5TNldYj4BWw0G5WJ3nyHNbewMQXP5a1JHCukid2CEq9ldfMIXruAo8RWXU+8F6viZ2n9aNNdvA3ao+FPwX3ebJdFgS2l050ZbvLNOxTwGp3Qi/rsJEhyJWsUPhB4H+mrt24suVI7CE9rhCn3YaNdQxWnOg4ztbYuyBRfGSjWA8kMHTUdaXFkiNUTqOsQHAzrGagFxyakKAXNSfSgQDiIPvn7xOr/41U4bU/gisvFrhuCbO3JeGtT/qxuKM+I09dL4fD3ioK2s= root@vm01 2026-03-10T07:01:23.208 DEBUG:teuthology.orchestra.run.vm01:> ' | sudo tee -a /etc/ssh/ca-key.pub 2026-03-10T07:01:23.278 INFO:teuthology.orchestra.run.vm01.stdout:ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGAK/eb97sGS/27XFHYGJ3Wz3wRgrchDG0sR7k872CZk3TYzVTrS6DAaWgVuwwAhaViHBm19USx0yPxOCmYnKCn3cB+/bukDaiyDNbRBxLdeqSK/zgY0Geb9clibchlj8nnfrva2LwRKYSjMmM8cg9HNFEB2Ox6nvnTwjGmypmcwDQqSuiCwYE2fd0xvL4o296jQxaKeouzyJn+CHMcpa5o2LAPqPBLqJ+jN8284xjrn//HP6nWgRGrl5TNldYj4BWw0G5WJ3nyHNbewMQXP5a1JHCukid2CEq9ldfMIXruAo8RWXU+8F6viZ2n9aNNdvA3ao+FPwX3ebJdFgS2l050ZbvLNOxTwGp3Qi/rsJEhyJWsUPhB4H+mrt24suVI7CE9rhCn3YaNdQxWnOg4ztbYuyBRfGSjWA8kMHTUdaXFkiNUTqOsQHAzrGagFxyakKAXNSfSgQDiIPvn7xOr/41U4bU/gisvFrhuCbO3JeGtT/qxuKM+I09dL4fD3ioK2s= root@vm01 2026-03-10T07:01:23.278 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:23.280 DEBUG:teuthology.orchestra.run.vm01:> sudo echo 'TrustedUserCAKeys /etc/ssh/ca-key.pub' | sudo tee -a /etc/ssh/sshd_config && sudo systemctl restart sshd 2026-03-10T07:01:23.346 INFO:teuthology.orchestra.run.vm01.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub 2026-03-10T07:01:23.402 DEBUG:teuthology.orchestra.run.vm08:> sudo echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGAK/eb97sGS/27XFHYGJ3Wz3wRgrchDG0sR7k872CZk3TYzVTrS6DAaWgVuwwAhaViHBm19USx0yPxOCmYnKCn3cB+/bukDaiyDNbRBxLdeqSK/zgY0Geb9clibchlj8nnfrva2LwRKYSjMmM8cg9HNFEB2Ox6nvnTwjGmypmcwDQqSuiCwYE2fd0xvL4o296jQxaKeouzyJn+CHMcpa5o2LAPqPBLqJ+jN8284xjrn//HP6nWgRGrl5TNldYj4BWw0G5WJ3nyHNbewMQXP5a1JHCukid2CEq9ldfMIXruAo8RWXU+8F6viZ2n9aNNdvA3ao+FPwX3ebJdFgS2l050ZbvLNOxTwGp3Qi/rsJEhyJWsUPhB4H+mrt24suVI7CE9rhCn3YaNdQxWnOg4ztbYuyBRfGSjWA8kMHTUdaXFkiNUTqOsQHAzrGagFxyakKAXNSfSgQDiIPvn7xOr/41U4bU/gisvFrhuCbO3JeGtT/qxuKM+I09dL4fD3ioK2s= root@vm01 2026-03-10T07:01:23.402 DEBUG:teuthology.orchestra.run.vm08:> ' | sudo tee -a /etc/ssh/ca-key.pub 2026-03-10T07:01:23.429 INFO:teuthology.orchestra.run.vm08.stdout:ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGAK/eb97sGS/27XFHYGJ3Wz3wRgrchDG0sR7k872CZk3TYzVTrS6DAaWgVuwwAhaViHBm19USx0yPxOCmYnKCn3cB+/bukDaiyDNbRBxLdeqSK/zgY0Geb9clibchlj8nnfrva2LwRKYSjMmM8cg9HNFEB2Ox6nvnTwjGmypmcwDQqSuiCwYE2fd0xvL4o296jQxaKeouzyJn+CHMcpa5o2LAPqPBLqJ+jN8284xjrn//HP6nWgRGrl5TNldYj4BWw0G5WJ3nyHNbewMQXP5a1JHCukid2CEq9ldfMIXruAo8RWXU+8F6viZ2n9aNNdvA3ao+FPwX3ebJdFgS2l050ZbvLNOxTwGp3Qi/rsJEhyJWsUPhB4H+mrt24suVI7CE9rhCn3YaNdQxWnOg4ztbYuyBRfGSjWA8kMHTUdaXFkiNUTqOsQHAzrGagFxyakKAXNSfSgQDiIPvn7xOr/41U4bU/gisvFrhuCbO3JeGtT/qxuKM+I09dL4fD3ioK2s= root@vm01 2026-03-10T07:01:23.429 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:01:23.431 DEBUG:teuthology.orchestra.run.vm08:> sudo echo 'TrustedUserCAKeys /etc/ssh/ca-key.pub' | sudo tee -a /etc/ssh/sshd_config && sudo systemctl restart sshd 2026-03-10T07:01:23.493 INFO:teuthology.orchestra.run.vm08.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub 2026-03-10T07:01:23.526 DEBUG:teuthology.orchestra.run.vm01:> sudo ssh-keygen -t rsa -f /root/cephadm-ssh-key -N '' && sudo ssh-keygen -s /root/ca-key -I user_root -n root -V +52w /root/cephadm-ssh-key 2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:Generating public/private rsa key pair. 2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:Your identification has been saved in /root/cephadm-ssh-key 2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:Your public key has been saved in /root/cephadm-ssh-key.pub 2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:The key fingerprint is: 2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:SHA256:fMdo2NJZ9CvFSmjuD+ri/s0zgB3k7TdL0Oxafd29ykE root@vm01 2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:The key's randomart image is: 2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:+---[RSA 3072]----+ 2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| . 
|
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| . o o |
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| o .ooo + |
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| .o*o*oo . |
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| oSoOo=E. +|
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| . o=..B.. =|
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| .o= + ..|
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| . ++oo .. |
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:| oo++ oo.o. |
2026-03-10T07:01:23.650 INFO:teuthology.orchestra.run.vm01.stdout:+----[SHA256]-----+
2026-03-10T07:01:23.664 INFO:teuthology.orchestra.run.vm01.stderr:Signed user key /root/cephadm-ssh-key-cert.pub: id "user_root" serial 0 for root valid from 2026-03-10T07:00:00 to 2027-03-09T07:01:23
2026-03-10T07:01:23.665 DEBUG:teuthology.orchestra.run.vm01:> sudo cat /etc/ssh/ca-key.pub
2026-03-10T07:01:23.687 INFO:teuthology.orchestra.run.vm01.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGAK/eb97sGS/27XFHYGJ3Wz3wRgrchDG0sR7k872CZk3TYzVTrS6DAaWgVuwwAhaViHBm19USx0yPxOCmYnKCn3cB+/bukDaiyDNbRBxLdeqSK/zgY0Geb9clibchlj8nnfrva2LwRKYSjMmM8cg9HNFEB2Ox6nvnTwjGmypmcwDQqSuiCwYE2fd0xvL4o296jQxaKeouzyJn+CHMcpa5o2LAPqPBLqJ+jN8284xjrn//HP6nWgRGrl5TNldYj4BWw0G5WJ3nyHNbewMQXP5a1JHCukid2CEq9ldfMIXruAo8RWXU+8F6viZ2n9aNNdvA3ao+FPwX3ebJdFgS2l050ZbvLNOxTwGp3Qi/rsJEhyJWsUPhB4H+mrt24suVI7CE9rhCn3YaNdQxWnOg4ztbYuyBRfGSjWA8kMHTUdaXFkiNUTqOsQHAzrGagFxyakKAXNSfSgQDiIPvn7xOr/41U4bU/gisvFrhuCbO3JeGtT/qxuKM+I09dL4fD3ioK2s= root@vm01
2026-03-10T07:01:23.687 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:01:23.688 DEBUG:teuthology.orchestra.run.vm01:> sudo cat /etc/ssh/sshd_config | grep TrustedUserCAKeys
2026-03-10T07:01:23.753 INFO:teuthology.orchestra.run.vm01.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub
2026-03-10T07:01:23.753 DEBUG:teuthology.orchestra.run.vm08:> sudo cat /etc/ssh/ca-key.pub
2026-03-10T07:01:23.783 INFO:teuthology.orchestra.run.vm08.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGAK/eb97sGS/27XFHYGJ3Wz3wRgrchDG0sR7k872CZk3TYzVTrS6DAaWgVuwwAhaViHBm19USx0yPxOCmYnKCn3cB+/bukDaiyDNbRBxLdeqSK/zgY0Geb9clibchlj8nnfrva2LwRKYSjMmM8cg9HNFEB2Ox6nvnTwjGmypmcwDQqSuiCwYE2fd0xvL4o296jQxaKeouzyJn+CHMcpa5o2LAPqPBLqJ+jN8284xjrn//HP6nWgRGrl5TNldYj4BWw0G5WJ3nyHNbewMQXP5a1JHCukid2CEq9ldfMIXruAo8RWXU+8F6viZ2n9aNNdvA3ao+FPwX3ebJdFgS2l050ZbvLNOxTwGp3Qi/rsJEhyJWsUPhB4H+mrt24suVI7CE9rhCn3YaNdQxWnOg4ztbYuyBRfGSjWA8kMHTUdaXFkiNUTqOsQHAzrGagFxyakKAXNSfSgQDiIPvn7xOr/41U4bU/gisvFrhuCbO3JeGtT/qxuKM+I09dL4fD3ioK2s= root@vm01
2026-03-10T07:01:23.783 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:01:23.785 DEBUG:teuthology.orchestra.run.vm08:> sudo cat /etc/ssh/sshd_config | grep TrustedUserCAKeys
2026-03-10T07:01:23.853 INFO:teuthology.orchestra.run.vm08.stdout:TrustedUserCAKeys /etc/ssh/ca-key.pub
2026-03-10T07:01:23.853 DEBUG:teuthology.orchestra.run.vm01:> sudo ls /root/
2026-03-10T07:01:23.879 INFO:teuthology.orchestra.run.vm01.stdout:ca-key
2026-03-10T07:01:23.879 INFO:teuthology.orchestra.run.vm01.stdout:ca-key.pub
2026-03-10T07:01:23.879 INFO:teuthology.orchestra.run.vm01.stdout:cephadm-ssh-key
2026-03-10T07:01:23.879 INFO:teuthology.orchestra.run.vm01.stdout:cephadm-ssh-key-cert.pub
2026-03-10T07:01:23.879 INFO:teuthology.orchestra.run.vm01.stdout:cephadm-ssh-key.pub
2026-03-10T07:01:23.879 INFO:teuthology.orchestra.run.vm01.stdout:original-ks.cfg
2026-03-10T07:01:23.880 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph
2026-03-10T07:01:23.946 DEBUG:teuthology.orchestra.run.vm08:> sudo mkdir -p /etc/ceph
2026-03-10T07:01:23.978 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /etc/ceph
2026-03-10T07:01:24.012 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 777 /etc/ceph
2026-03-10T07:01:24.048 INFO:tasks.cephadm:Writing seed config...
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [global] mon election default strategy = 3
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = True
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T07:01:24.049 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T07:01:24.049 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T07:01:24.049 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T07:01:24.070 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000  # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = c470e80c-1c4e-11f1-89aa-7f5873752d90
mon election default strategy = 3
[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000
[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = True
[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660  # 11m
auth service ticket ttl = 240  # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20
[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T07:01:24.070 DEBUG:teuthology.orchestra.run.vm01:mon.a> sudo journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.a.service
2026-03-10T07:01:24.112 DEBUG:teuthology.orchestra.run.vm01:mgr.a> sudo journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a.service
2026-03-10T07:01:24.153 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T07:01:24.153 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --ssh-private-key /root/cephadm-ssh-key --ssh-signed-cert /root/cephadm-ssh-key-cert.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.101 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T07:01:24.310 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-10T07:01:24.310 INFO:teuthology.orchestra.run.vm01.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'c470e80c-1c4e-11f1-89aa-7f5873752d90', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--ssh-private-key', '/root/cephadm-ssh-key', '--ssh-signed-cert', '/root/cephadm-ssh-key-cert.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.101', '--skip-admin-label']
2026-03-10T07:01:24.310 INFO:teuthology.orchestra.run.vm01.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts.
2026-03-10T07:01:24.310 INFO:teuthology.orchestra.run.vm01.stdout:Verifying podman|docker is present...
2026-03-10T07:01:24.338 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 5.8.0
2026-03-10T07:01:24.338 INFO:teuthology.orchestra.run.vm01.stdout:Verifying lvm2 is present...
2026-03-10T07:01:24.338 INFO:teuthology.orchestra.run.vm01.stdout:Verifying time synchronization is in place...
2026-03-10T07:01:24.346 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T07:01:24.346 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T07:01:24.352 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T07:01:24.352 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-10T07:01:24.359 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-10T07:01:24.366 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-10T07:01:24.366 INFO:teuthology.orchestra.run.vm01.stdout:Unit chronyd.service is enabled and running
2026-03-10T07:01:24.366 INFO:teuthology.orchestra.run.vm01.stdout:Repeating the final host check...
2026-03-10T07:01:24.387 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 5.8.0
2026-03-10T07:01:24.387 INFO:teuthology.orchestra.run.vm01.stdout:podman (/bin/podman) version 5.8.0 is present
2026-03-10T07:01:24.387 INFO:teuthology.orchestra.run.vm01.stdout:systemctl is present
2026-03-10T07:01:24.387 INFO:teuthology.orchestra.run.vm01.stdout:lvcreate is present
2026-03-10T07:01:24.393 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service
2026-03-10T07:01:24.394 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T07:01:24.399 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 3 from systemctl is-active chrony.service
2026-03-10T07:01:24.399 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout inactive
2026-03-10T07:01:24.406 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout enabled
2026-03-10T07:01:24.412 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stdout active
2026-03-10T07:01:24.412 INFO:teuthology.orchestra.run.vm01.stdout:Unit chronyd.service is enabled and running
2026-03-10T07:01:24.412 INFO:teuthology.orchestra.run.vm01.stdout:Host looks OK
2026-03-10T07:01:24.412 INFO:teuthology.orchestra.run.vm01.stdout:Cluster fsid: c470e80c-1c4e-11f1-89aa-7f5873752d90
2026-03-10T07:01:24.412 INFO:teuthology.orchestra.run.vm01.stdout:Acquiring lock 139718572935632 on /run/cephadm/c470e80c-1c4e-11f1-89aa-7f5873752d90.lock
2026-03-10T07:01:24.412 INFO:teuthology.orchestra.run.vm01.stdout:Lock 139718572935632 acquired on /run/cephadm/c470e80c-1c4e-11f1-89aa-7f5873752d90.lock
2026-03-10T07:01:24.413 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 3300 ...
2026-03-10T07:01:24.413 INFO:teuthology.orchestra.run.vm01.stdout:Verifying IP 192.168.123.101 port 6789 ...
2026-03-10T07:01:24.413 INFO:teuthology.orchestra.run.vm01.stdout:Base mon IP(s) is [192.168.123.101:3300, 192.168.123.101:6789], mon addrv is [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-10T07:01:24.417 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout default via 192.168.123.1 dev eth0 proto dhcp src 192.168.123.101 metric 100
2026-03-10T07:01:24.417 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 192.168.123.0/24 dev eth0 proto kernel scope link src 192.168.123.101 metric 100
2026-03-10T07:01:24.420 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T07:01:24.420 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout fe80::/64 dev eth0 proto kernel metric 1024 pref medium
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout inet6 ::1/128 scope host
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout 2: eth0: mtu 1500 state UP qlen 1000
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout inet6 fe80::5055:ff:fe00:1/64 scope link noprefixroute
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:/sbin/ip: stdout valid_lft forever preferred_lft forever
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:Mon IP `192.168.123.101` is in CIDR network `192.168.123.0/24`
2026-03-10T07:01:24.423 INFO:teuthology.orchestra.run.vm01.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24']
2026-03-10T07:01:24.424 INFO:teuthology.orchestra.run.vm01.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T07:01:24.424 INFO:teuthology.orchestra.run.vm01.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T07:01:25.751 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stdout 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T07:01:25.751 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Trying to pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-10T07:01:25.751 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Getting image source signatures
2026-03-10T07:01:25.751 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying blob sha256:1752b8d01aa0dd33bbe0ab24e8316174c94fbdcd5d26252e2680bba0624747a7
2026-03-10T07:01:25.751 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying blob sha256:8e380faede39ebd4286247457b408d979ab568aafd8389c42ec304b8cfba4e92
2026-03-10T07:01:25.751 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Copying config sha256:654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c
2026-03-10T07:01:25.751 INFO:teuthology.orchestra.run.vm01.stdout:/bin/podman: stderr Writing manifest to image destination
2026-03-10T07:01:25.911 INFO:teuthology.orchestra.run.vm01.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T07:01:25.911 INFO:teuthology.orchestra.run.vm01.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-10T07:01:25.911 INFO:teuthology.orchestra.run.vm01.stdout:Extracting ceph user uid/gid from container image...
2026-03-10T07:01:26.019 INFO:teuthology.orchestra.run.vm01.stdout:stat: stdout 167 167
2026-03-10T07:01:26.019 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial keys...
2026-03-10T07:01:26.126 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQDGwa9p/j0hBRAAdWEeF5BTnqvv0HasXJ1s/w==
2026-03-10T07:01:26.240 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQDGwa9pX/sGDBAAd7O17+WkyyVSgYOWqZrh5A==
2026-03-10T07:01:26.355 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph-authtool: stdout AQDGwa9p5WXNEhAADY8JnGqTuxjqYRV0XDLb2Q==
2026-03-10T07:01:26.355 INFO:teuthology.orchestra.run.vm01.stdout:Creating initial monmap...
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to c470e80c-1c4e-11f1-89aa-7f5873752d90
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool for a [v2:192.168.123.101:3300,v1:192.168.123.101:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = quincy
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: set fsid to c470e80c-1c4e-11f1-89aa-7f5873752d90
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:01:26.482 INFO:teuthology.orchestra.run.vm01.stdout:Creating mon...
2026-03-10T07:01:26.718 INFO:teuthology.orchestra.run.vm01.stdout:create mon.a on
2026-03-10T07:01:27.081 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Removed "/etc/systemd/system/multi-user.target.wants/ceph.target".
2026-03-10T07:01:27.272 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T07:01:27.418 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90.target → /etc/systemd/system/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90.target.
2026-03-10T07:01:27.418 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90.target → /etc/systemd/system/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90.target.
2026-03-10T07:01:27.576 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.a
2026-03-10T07:01:27.576 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.a.service: Unit ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.a.service not loaded.
2026-03-10T07:01:27.745 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90.target.wants/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.a.service → /etc/systemd/system/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@.service.
2026-03-10T07:01:28.256 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:28 vm01 podman[50399]: 2026-03-10 07:01:28.163785805 +0000 UTC m=+0.289913690 container start 97b7a0adb2405aca806231be7db38bb69c260d36b017382e322c8039940fed9d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image)
2026-03-10T07:01:28.256 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:28 vm01 bash[50399]: 97b7a0adb2405aca806231be7db38bb69c260d36b017382e322c8039940fed9d
2026-03-10T07:01:28.447 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present
2026-03-10T07:01:28.448 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available
2026-03-10T07:01:28.448 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon to start...
2026-03-10T07:01:28.448 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mon...
2026-03-10T07:01:28.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:28 vm01 systemd[1]: Started Ceph mon.a for c470e80c-1c4e-11f1-89aa-7f5873752d90.
2026-03-10T07:01:28.781 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:28 vm01 ceph-mon[50413]: mkfs c470e80c-1c4e-11f1-89aa-7f5873752d90
2026-03-10T07:01:28.781 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:28 vm01 ceph-mon[50413]: mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T07:01:28.783 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout cluster:
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout id: c470e80c-1c4e-11f1-89aa-7f5873752d90
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout health: HEALTH_OK
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout services:
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.174097s)
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr: no daemons active
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout data:
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout pgs:
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:mon is available
2026-03-10T07:01:28.784 INFO:teuthology.orchestra.run.vm01.stdout:Assimilating anything we can from ceph.conf...
2026-03-10T07:01:28.978 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T07:01:28.978 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global]
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = c470e80c-1c4e-11f1-89aa-7f5873752d90
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789]
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr]
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = True
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd]
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true
2026-03-10T07:01:28.979 INFO:teuthology.orchestra.run.vm01.stdout:Generating new minimal ceph.conf...
2026-03-10T07:01:29.186 INFO:teuthology.orchestra.run.vm01.stdout:Restarting the monitor...
2026-03-10T07:01:29.291 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 systemd[1]: Stopping Ceph mon.a for c470e80c-1c4e-11f1-89aa-7f5873752d90...
2026-03-10T07:01:29.291 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a[50409]: 2026-03-10T07:01:29.269+0000 7f299dcca640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0
2026-03-10T07:01:29.292 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a[50409]: 2026-03-10T07:01:29.269+0000 7f299dcca640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-10T07:01:29.520 INFO:teuthology.orchestra.run.vm01.stdout:Setting public_network to 192.168.123.0/24 in mon config section
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 podman[50616]: 2026-03-10 07:01:29.292495226 +0000 UTC m=+0.037442518 container died 97b7a0adb2405aca806231be7db38bb69c260d36b017382e322c8039940fed9d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 podman[50616]: 2026-03-10 07:01:29.306968973 +0000 UTC m=+0.051916265 container remove 97b7a0adb2405aca806231be7db38bb69c260d36b017382e322c8039940fed9d (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9)
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 bash[50616]: ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 systemd[1]: ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.a.service: Deactivated successfully.
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 systemd[1]: Stopped Ceph mon.a for c470e80c-1c4e-11f1-89aa-7f5873752d90.
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 systemd[1]: Starting Ceph mon.a for c470e80c-1c4e-11f1-89aa-7f5873752d90...
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 podman[50681]: 2026-03-10 07:01:29.471295443 +0000 UTC m=+0.018325480 container create 43de93e2c1cd8b77628d3fba5645d4c907d3ca75a9c647eeccef37420ccd4a25 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 podman[50681]: 2026-03-10 07:01:29.508657189 +0000 UTC m=+0.055687236 container init 43de93e2c1cd8b77628d3fba5645d4c907d3ca75a9c647eeccef37420ccd4a25 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3)
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 podman[50681]: 2026-03-10 07:01:29.511847142 +0000 UTC m=+0.058877179 container start 43de93e2c1cd8b77628d3fba5645d4c907d3ca75a9c647eeccef37420ccd4a25 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 bash[50681]: 43de93e2c1cd8b77628d3fba5645d4c907d3ca75a9c647eeccef37420ccd4a25
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 podman[50681]: 2026-03-10 07:01:29.464567165 +0000 UTC m=+0.011597212 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 systemd[1]: Started Ceph mon.a for c470e80c-1c4e-11f1-89aa-7f5873752d90.
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: set uid:gid to 167:167 (ceph:ceph)
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 2
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: pidfile_write: ignore empty --pid-file
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: load: jerasure load: lrc
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: RocksDB version: 7.9.2
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Git sha 0
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: DB SUMMARY
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: DB Session ID: 9FWUROSXT4QNQSBIISXH
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: CURRENT file: CURRENT
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: IDENTITY file: IDENTITY
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75535 ;
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.error_if_exists: 0
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.create_if_missing: 0
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.paranoid_checks: 1
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.env: 0x564387b5bdc0
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.fs: PosixFileSystem
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.info_log: 0x564388752700
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_file_opening_threads: 16
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.statistics: (nil)
2026-03-10T07:01:29.555 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.use_fsync: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_log_file_size: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.keep_log_file_num: 1000
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.recycle_log_file_num: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.allow_fallocate: 1
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.allow_mmap_reads: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.allow_mmap_writes: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.use_direct_reads: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.create_missing_column_families: 0
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.db_log_dir:
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.wal_dir:
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T07:01:29.556
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.advise_random_on_open: 1 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.db_write_buffer_size: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.write_buffer_manager: 0x564388757900 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.rate_limiter: (nil) 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.wal_recovery_mode: 2 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: 
rocksdb: Options.enable_thread_tracking: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.enable_pipelined_write: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.unordered_write: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.row_cache: None 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.wal_filter: None 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.allow_ingest_behind: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.two_write_queues: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.manual_wal_flush: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.wal_compression: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 
ceph-mon[50695]: rocksdb: Options.atomic_flush: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T07:01:29.556 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.log_readahead_size: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.best_efforts_recovery: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.allow_data_in_errors: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.db_host_id: __hostname__ 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_background_jobs: 2 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_background_compactions: -1 2026-03-10T07:01:29.557 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_subcompactions: 1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_total_wal_size: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_open_files: -1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bytes_per_sync: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: 
Options.compaction_readahead_size: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_background_flushes: -1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Compression algorithms supported: 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: kZSTD supported: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: kXpressCompression supported: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: kBZip2Compression supported: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: kLZ4Compression supported: 1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: kZlibCompression supported: 1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: kLZ4HCCompression supported: 1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: kSnappyCompression supported: 1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T07:01:29.557 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.merge_operator: 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_filter: None 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_filter_factory: None 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.sst_partitioner_factory: None 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564388752640) 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: cache_index_and_filter_blocks: 1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: pin_top_level_index_and_filter: 1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: index_type: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: data_block_index_type: 0 
2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: index_shortening: 1 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: data_block_hash_table_util_ratio: 0.750000 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: checksum: 4 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: no_block_cache: 0 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: block_cache: 0x564388777350 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: block_cache_name: BinnedLRUCache 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: block_cache_options: 2026-03-10T07:01:29.557 INFO:journalctl@ceph.mon.a.vm01.stdout: capacity : 536870912 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: num_shard_bits : 4 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: strict_capacity_limit : 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: high_pri_pool_ratio: 0.000 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: block_cache_compressed: (nil) 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: persistent_cache: (nil) 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: block_size: 4096 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: block_size_deviation: 10 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: block_restart_interval: 16 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: index_block_restart_interval: 1 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: metadata_block_size: 4096 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: partition_filters: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: use_delta_encoding: 1 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: filter_policy: bloomfilter 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: whole_key_filtering: 1 2026-03-10T07:01:29.558 
INFO:journalctl@ceph.mon.a.vm01.stdout: verify_compression: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: read_amp_bytes_per_bit: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: format_version: 5 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: enable_index_compression: 1 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: block_align: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: max_auto_readahead_size: 262144 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: prepopulate_block_cache: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: initial_auto_readahead_size: 8192 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout: num_file_reads_for_auto_readahead: 2 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.write_buffer_size: 33554432 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_write_buffer_number: 2 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression: NoCompression 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression: Disabled 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.prefix_extractor: nullptr 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.num_levels: 7 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.min_write_buffer_number_to_merge: 1 
2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T07:01:29.558 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.level: 32767 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.strategy: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.enabled: false 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.target_file_size_base: 67108864 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T07:01:29.558 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T07:01:29.558 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.arena_block_size: 1048576 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.disable_auto_compactions: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: 
Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.inplace_update_support: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.bloom_locality: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.max_successive_merges: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.paranoid_file_checks: 0 
2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.force_consistency_checks: 1 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.report_bg_io_stats: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.ttl: 2592000 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.enable_blob_files: false 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.min_blob_size: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.blob_file_size: 268435456 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T07:01:29.559 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.blob_file_starting_level: 0 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f3b6b320-5057-4e0f-aef4-ea6ab26f8db9 2026-03-10T07:01:29.559 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773126089532741, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773126089534125, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70895, "index_size": 174, 
"index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65374, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773126089, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f3b6b320-5057-4e0f-aef4-ea6ab26f8db9", "db_session_id": "9FWUROSXT4QNQSBIISXH", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: EVENT_LOG_v1 {"time_micros": 1773126089534212, "job": 1, "event": "recovery_finished"} 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564388778e00 
2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: DB pointer 0x56438888e000 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: ** DB Stats ** 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: ** Compaction Stats [default] ** 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: L0 2/0 72.77 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 58.2 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Sum 2/0 72.77 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 58.2 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 58.2 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: ** Compaction Stats [default] ** 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 58.2 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T07:01:29.560 
INFO:journalctl@ceph.mon.a.vm01.stdout: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: AddFile(Keys): cumulative 0, interval 0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Cumulative compaction: 0.00 GB write, 15.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Interval compaction: 0.00 GB write, 15.85 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Block cache BinnedLRUCache@0x564388777350#2 capacity: 512.00 MB usage: 1.06 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: Block cache entry stats(count,size,portion): FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%) 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: ** File Read Latency Histogram By Level [default] ** 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: starting mon.a rank 0 at public addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] at bind addrs [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid 
c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: mon.a@-1(???) e1 preinit fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: mon.a@-1(???).mds e1 new map 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: mon.a@-1(???).mds e1 print_map 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: e1 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: btime 2026-03-10T07:01:28:543209+0000 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: legacy client fscid: -1 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: 2026-03-10T07:01:29.560 INFO:journalctl@ceph.mon.a.vm01.stdout: No filesystems configured 2026-03-10T07:01:29.754 INFO:teuthology.orchestra.run.vm01.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T07:01:29.755 INFO:teuthology.orchestra.run.vm01.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:01:29.755 INFO:teuthology.orchestra.run.vm01.stdout:Creating mgr... 2026-03-10T07:01:29.755 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T07:01:29.756 INFO:teuthology.orchestra.run.vm01.stdout:Verifying port 0.0.0.0:8765 ... 
2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: monmap epoch 1 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: last_changed 2026-03-10T07:01:26.440497+0000 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: created 2026-03-10T07:01:26.440497+0000 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: min_mon_release 19 (squid) 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: election_strategy: 1 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: fsmap 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: osdmap e1: 0 total, 0 up, 0 in 2026-03-10T07:01:29.828 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:29 vm01 ceph-mon[50695]: mgrmap e1: no daemons active 2026-03-10T07:01:29.917 INFO:teuthology.orchestra.run.vm01.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a 2026-03-10T07:01:29.917 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Failed to reset failed state of unit ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a.service: Unit ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a.service not loaded. 
2026-03-10T07:01:30.056 INFO:teuthology.orchestra.run.vm01.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90.target.wants/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a.service → /etc/systemd/system/ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@.service. 2026-03-10T07:01:30.189 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 systemd[1]: Starting Ceph mgr.a for c470e80c-1c4e-11f1-89aa-7f5873752d90... 2026-03-10T07:01:30.250 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T07:01:30.250 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T07:01:30.250 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T07:01:30.250 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T07:01:30.250 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr to start... 2026-03-10T07:01:30.250 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr... 
2026-03-10T07:01:30.446 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 podman[50895]: 2026-03-10 07:01:30.189807983 +0000 UTC m=+0.020229208 container create 9e1d6e4d2b806b70ac439410075661d43e951ac25f0f4e584d08414cfc6a3fc3 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, CEPH_REF=squid) 2026-03-10T07:01:30.446 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 podman[50895]: 2026-03-10 07:01:30.240374885 +0000 UTC m=+0.070796131 container init 9e1d6e4d2b806b70ac439410075661d43e951ac25f0f4e584d08414cfc6a3fc3 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9) 
2026-03-10T07:01:30.446 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 podman[50895]: 2026-03-10 07:01:30.243150063 +0000 UTC m=+0.073571298 container start 9e1d6e4d2b806b70ac439410075661d43e951ac25f0f4e584d08414cfc6a3fc3 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True) 2026-03-10T07:01:30.446 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 bash[50895]: 9e1d6e4d2b806b70ac439410075661d43e951ac25f0f4e584d08414cfc6a3fc3 2026-03-10T07:01:30.446 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 podman[50895]: 2026-03-10 07:01:30.181063857 +0000 UTC m=+0.011485092 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T07:01:30.446 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 systemd[1]: Started Ceph mgr.a for c470e80c-1c4e-11f1-89aa-7f5873752d90. 
2026-03-10T07:01:30.446 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:30.370+0000 7fbc48844140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T07:01:30.446 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:30.424+0000 7fbc48844140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T07:01:30.486 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "c470e80c-1c4e-11f1-89aa-7f5873752d90", 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 
2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T07:01:30.488 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:01:28:543209+0000", 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T07:01:30.488 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T07:01:30.489 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:01:28.544038+0000", 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:01:30.489 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (1/15)... 2026-03-10T07:01:31.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:30 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1690608054' entity='client.admin' 2026-03-10T07:01:31.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:30 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/2058262095' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:01:31.028 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:30 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:30.907+0000 7fbc48844140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T07:01:31.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:31 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:31.261+0000 7fbc48844140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T07:01:31.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:31 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T07:01:31.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:31 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T07:01:31.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:31 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: from numpy import show_config as show_numpy_config 2026-03-10T07:01:31.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:31 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:31.352+0000 7fbc48844140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T07:01:31.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:31 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:31.390+0000 7fbc48844140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T07:01:31.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:31 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:31.462+0000 7fbc48844140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T07:01:32.278 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.005+0000 7fbc48844140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T07:01:32.278 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.125+0000 7fbc48844140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:01:32.278 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.169+0000 7fbc48844140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T07:01:32.278 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.206+0000 7fbc48844140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T07:01:32.278 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 
ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.252+0000 7fbc48844140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T07:01:32.610 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.294+0000 7fbc48844140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T07:01:32.610 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.488+0000 7fbc48844140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T07:01:32.610 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.543+0000 7fbc48844140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T07:01:32.940 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:32 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:32.792+0000 7fbc48844140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:01:33.210 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:33.119+0000 7fbc48844140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:01:33.210 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:33.163+0000 7fbc48844140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T07:01:33.210 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:33.209+0000 7fbc48844140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T07:01:33.519 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 
2026-03-10T07:01:33.302+0000 7fbc48844140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T07:01:33.519 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:33.343+0000 7fbc48844140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T07:01:33.519 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:33.435+0000 7fbc48844140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "c470e80c-1c4e-11f1-89aa-7f5873752d90", 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T07:01:33.677 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 
2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T07:01:33.678 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:01:28:543209+0000", 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T07:01:33.678 
INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:01:28.544038+0000", 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:01:33.678 INFO:teuthology.orchestra.run.vm01.stdout:mgr not available, waiting (2/15)... 2026-03-10T07:01:33.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/645197636' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:01:33.779 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:33.581+0000 7fbc48844140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:01:33.779 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:33.746+0000 7fbc48844140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T07:01:34.278 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:33 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:33.787+0000 7fbc48844140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: Activating manager daemon a 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: mgrmap e2: a(active, starting, since 0.00496295s) 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:01:35.028 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: Manager daemon a is now available 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' 2026-03-10T07:01:35.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:34 vm01 ceph-mon[50695]: from='mgr.14100 192.168.123.101:0/4197316970' entity='mgr.a' 2026-03-10T07:01:35.974 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:35 vm01 ceph-mon[50695]: mgrmap e3: a(active, since 1.01047s) 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsid": "c470e80c-1c4e-11f1-89aa-7f5873752d90", 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 
"status": "HEALTH_OK", 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 0 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "quorum_age": 6, 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:35.993 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: 
stdout "num_up_osds": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T07:01:28:543209+0000", 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 
"by_rank": [], 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout ], 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T07:01:28.544038+0000", 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout }, 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T07:01:35.994 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:01:35.994 
INFO:teuthology.orchestra.run.vm01.stdout:mgr is available 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout fsid = c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.101:3300,v1:192.168.123.101:6789] 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T07:01:36.259 INFO:teuthology.orchestra.run.vm01.stdout:Enabling cephadm module... 
2026-03-10T07:01:37.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:36 vm01 ceph-mon[50695]: mgrmap e4: a(active, since 2s) 2026-03-10T07:01:37.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:36 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/186873678' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T07:01:37.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:36 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1087488457' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T07:01:37.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:36 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1087488457' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T07:01:37.243 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:36 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1308908718' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T07:01:37.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:37 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: ignoring --setuser ceph since I am not root 2026-03-10T07:01:37.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:37 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: ignoring --setgroup ceph since I am not root 2026-03-10T07:01:37.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:37 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:37.375+0000 7f475e26f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T07:01:37.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:37 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:37.429+0000 7f475e26f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T07:01:37.596 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 
2026-03-10T07:01:37.596 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-10T07:01:37.596 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T07:01:37.596 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T07:01:37.596 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T07:01:37.597 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:01:37.597 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart... 2026-03-10T07:01:37.597 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 5... 2026-03-10T07:01:38.227 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:37 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:37.910+0000 7f475e26f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T07:01:38.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1308908718' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T07:01:38.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-mon[50695]: mgrmap e5: a(active, since 3s) 2026-03-10T07:01:38.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/4133589518' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T07:01:38.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:38.256+0000 7f475e26f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T07:01:38.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T07:01:38.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T07:01:38.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: from numpy import show_config as show_numpy_config 2026-03-10T07:01:38.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:38.346+0000 7f475e26f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T07:01:38.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:38.385+0000 7f475e26f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T07:01:38.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:38 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:38.460+0000 7f475e26f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T07:01:39.261 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.008+0000 7f475e26f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T07:01:39.262 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.133+0000 7f475e26f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:01:39.262 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.180+0000 7f475e26f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T07:01:39.262 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.218+0000 7f475e26f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T07:01:39.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 
ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.261+0000 7f475e26f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T07:01:39.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.301+0000 7f475e26f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T07:01:39.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.482+0000 7f475e26f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T07:01:40.028 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.539+0000 7f475e26f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T07:01:40.028 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:39 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:39.763+0000 7f475e26f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:01:40.324 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:40.048+0000 7f475e26f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:01:40.325 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:40.087+0000 7f475e26f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T07:01:40.325 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:40.131+0000 7f475e26f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T07:01:40.325 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 
2026-03-10T07:01:40.209+0000 7f475e26f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T07:01:40.325 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:40.246+0000 7f475e26f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T07:01:40.584 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:40.324+0000 7f475e26f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T07:01:40.584 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:40.440+0000 7f475e26f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:01:40.902 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:40.584+0000 7f475e26f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T07:01:40.902 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:40.624+0000 7f475e26f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: Active manager daemon a restarted 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: Activating manager daemon a 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: osdmap e2: 0 total, 0 up, 0 in 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: mgrmap e6: a(active, starting, since 0.229079s) 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: from='mgr.14118 
192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: Manager daemon a is now available 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:41.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:40 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:41.951 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:01:41.951 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7, 2026-03-10T07:01:41.951 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T07:01:41.951 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:01:41.951 
INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 5 is available 2026-03-10T07:01:41.951 INFO:teuthology.orchestra.run.vm01.stdout:Setting orchestrator backend to cephadm... 2026-03-10T07:01:42.220 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:41 vm01 ceph-mon[50695]: Found migration_current of "None". Setting to last migration. 2026-03-10T07:01:42.220 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:41 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:42.220 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:41 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:01:42.220 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:41 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:42.220 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:41 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T07:01:42.220 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:41 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:42.220 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:41 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:42.220 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:41 vm01 ceph-mon[50695]: mgrmap e7: a(active, since 1.23052s) 2026-03-10T07:01:42.504 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T07:01:42.504 INFO:teuthology.orchestra.run.vm01.stdout:Using provided ssh private key and signed cert ... 
2026-03-10T07:01:43.099 INFO:teuthology.orchestra.run.vm01.stdout:Adding host vm01... 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:42] ENGINE Bus STARTING 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:42] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:42] ENGINE Client ('192.168.123.101', 49348) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:42] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: 
[10/Mar/2026:07:01:42] ENGINE Bus STARTED 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:43.428 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:43 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:44.400 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:44 vm01 ceph-mon[50695]: from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:44.400 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:44 vm01 ceph-mon[50695]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:44.400 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:44 vm01 ceph-mon[50695]: Set ssh ssh_identity_key 2026-03-10T07:01:44.400 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:44 vm01 ceph-mon[50695]: Set ssh private key 2026-03-10T07:01:44.400 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:44 vm01 ceph-mon[50695]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:44.400 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:44 vm01 ceph-mon[50695]: Set ssh ssh_identity_cert 2026-03-10T07:01:44.400 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:44 vm01 ceph-mon[50695]: mgrmap e8: a(active, since 2s) 2026-03-10T07:01:45.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:45 vm01 ceph-mon[50695]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm01", "addr": 
"192.168.123.101", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:45.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:45 vm01 ceph-mon[50695]: Deploying cephadm binary to vm01 2026-03-10T07:01:45.859 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Added host 'vm01' with addr '192.168.123.101' 2026-03-10T07:01:45.859 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mon service... 2026-03-10T07:01:46.165 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-10T07:01:46.165 INFO:teuthology.orchestra.run.vm01.stdout:Deploying unmanaged mgr service... 2026-03-10T07:01:46.426 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-10T07:01:46.950 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:46 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:46.950 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:46 vm01 ceph-mon[50695]: Added host vm01 2026-03-10T07:01:46.950 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:46 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:46.950 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:46 vm01 ceph-mon[50695]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:46.950 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:46 vm01 ceph-mon[50695]: Saving service mon spec with placement count:5 2026-03-10T07:01:46.950 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:46 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:46.950 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:46 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 
2026-03-10T07:01:46.950 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:46 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/857674305' entity='client.admin' 2026-03-10T07:01:46.969 INFO:teuthology.orchestra.run.vm01.stdout:Enabling the dashboard module... 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:47 vm01 ceph-mon[50695]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:47 vm01 ceph-mon[50695]: Saving service mgr spec with placement count:2 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:47 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1551507601' entity='client.admin' 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:47 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/13209515' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:47 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:47 vm01 ceph-mon[50695]: from='mgr.14118 192.168.123.101:0/3114904449' entity='mgr.a' 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:48 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: ignoring --setuser ceph since I am not root 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:48 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: ignoring --setgroup ceph since I am not root 2026-03-10T07:01:48.201 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:48 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:48.198+0000 7fa034c1e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES 
member 2026-03-10T07:01:48.429 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:01:48.429 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-10T07:01:48.429 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T07:01:48.429 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T07:01:48.429 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T07:01:48.429 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:01:48.429 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for the mgr to restart... 2026-03-10T07:01:48.429 INFO:teuthology.orchestra.run.vm01.stdout:Waiting for mgr epoch 9... 2026-03-10T07:01:48.453 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:48 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:48.253+0000 7fa034c1e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T07:01:48.725 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:48 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:48.724+0000 7fa034c1e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T07:01:49.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:48 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/13209515' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T07:01:49.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:48 vm01 ceph-mon[50695]: mgrmap e9: a(active, since 7s) 2026-03-10T07:01:49.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:48 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/3496374119' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T07:01:49.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:49.106+0000 7fa034c1e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T07:01:49.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T07:01:49.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T07:01:49.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: from numpy import show_config as show_numpy_config 2026-03-10T07:01:49.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:49.195+0000 7fa034c1e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T07:01:49.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:49.236+0000 7fa034c1e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T07:01:49.529 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:49.314+0000 7fa034c1e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T07:01:50.138 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:49.864+0000 7fa034c1e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T07:01:50.138 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:49.980+0000 7fa034c1e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:01:50.138 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.021+0000 7fa034c1e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T07:01:50.138 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.057+0000 7fa034c1e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T07:01:50.138 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 
ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.099+0000 7fa034c1e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T07:01:50.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.138+0000 7fa034c1e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T07:01:50.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.320+0000 7fa034c1e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T07:01:50.528 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.377+0000 7fa034c1e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T07:01:50.903 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.606+0000 7fa034c1e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:01:51.183 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.902+0000 7fa034c1e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:01:51.183 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.941+0000 7fa034c1e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T07:01:51.183 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:50 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:50.984+0000 7fa034c1e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T07:01:51.183 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 
2026-03-10T07:01:51.064+0000 7fa034c1e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T07:01:51.183 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:51.102+0000 7fa034c1e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T07:01:51.439 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:51.182+0000 7fa034c1e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T07:01:51.439 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:51.299+0000 7fa034c1e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: Active manager daemon a restarted 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: Activating manager daemon a 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: osdmap e3: 0 total, 0 up, 0 in 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: mgrmap e10: a(active, starting, since 0.0059195s) 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 
cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: Manager daemon a is now available 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:51.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:51.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T07:01:51.779 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:51.438+0000 7fa034c1e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T07:01:51.779 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:01:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:01:51.475+0000 7fa034c1e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T07:01:52.521 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout { 2026-03-10T07:01:52.521 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-10T07:01:52.521 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: 
stdout "initialized": true 2026-03-10T07:01:52.521 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout } 2026-03-10T07:01:52.521 INFO:teuthology.orchestra.run.vm01.stdout:mgr epoch 9 is available 2026-03-10T07:01:52.521 INFO:teuthology.orchestra.run.vm01.stdout:Generating a dashboard self-signed certificate... 2026-03-10T07:01:52.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:52 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T07:01:52.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:52 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:52.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:52 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:01:52.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:52 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:52.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:52 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.agent.vm01", "caps": []}]: dispatch 2026-03-10T07:01:52.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:52 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.agent.vm01", "caps": []}]': finished 2026-03-10T07:01:52.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:52 vm01 ceph-mon[50695]: mgrmap e11: a(active, since 1.01932s) 2026-03-10T07:01:52.876 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-10T07:01:52.876 
INFO:teuthology.orchestra.run.vm01.stdout:Creating initial admin user... 2026-03-10T07:01:53.297 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$VjgqwzPMwzIwh5zrtNtOqObBTu1FMc72EhsuxKjqVuQZwDilaU4dO", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773126113, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T07:01:53.297 INFO:teuthology.orchestra.run.vm01.stdout:Fetching dashboard port number... 2026-03-10T07:01:53.576 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stdout 8443 2026-03-10T07:01:53.577 INFO:teuthology.orchestra.run.vm01.stdout:firewalld does not appear to be present 2026-03-10T07:01:53.577 INFO:teuthology.orchestra.run.vm01.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-10T07:01:53.579 INFO:teuthology.orchestra.run.vm01.stdout:Ceph Dashboard is now available at: 2026-03-10T07:01:53.579 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.579 INFO:teuthology.orchestra.run.vm01.stdout: URL: https://vm01.local:8443/ 2026-03-10T07:01:53.579 INFO:teuthology.orchestra.run.vm01.stdout: User: admin 2026-03-10T07:01:53.579 INFO:teuthology.orchestra.run.vm01.stdout: Password: 3hy1jvew9h 2026-03-10T07:01:53.579 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.579 INFO:teuthology.orchestra.run.vm01.stdout:Saving cluster configuration to /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config directory 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid 
c470e80c-1c4e-11f1-89aa-7f5873752d90 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout:Or, if you are only running a single cluster on this host: 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-10T07:01:53.903 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.904 INFO:teuthology.orchestra.run.vm01.stdout: ceph telemetry on 2026-03-10T07:01:53.904 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.904 INFO:teuthology.orchestra.run.vm01.stdout:For more information see: 2026-03-10T07:01:53.904 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.904 INFO:teuthology.orchestra.run.vm01.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-10T07:01:53.904 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:53.904 INFO:teuthology.orchestra.run.vm01.stdout:Bootstrap complete. 2026-03-10T07:01:53.951 INFO:tasks.cephadm:Fetching config... 2026-03-10T07:01:53.951 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T07:01:53.951 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T07:01:53.972 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T07:01:53.972 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T07:01:53.972 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T07:01:54.029 INFO:tasks.cephadm:Fetching mon keyring... 
2026-03-10T07:01:54.029 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T07:01:54.029 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/keyring of=/dev/stdout 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:52] ENGINE Bus STARTING 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:52] ENGINE Serving on https://192.168.123.101:7150 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:52] ENGINE Client ('192.168.123.101', 55694) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:52] ENGINE Serving on http://192.168.123.101:8765 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: [10/Mar/2026:07:01:52] ENGINE Bus STARTED 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: Deploying daemon agent.vm01 on vm01 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 
ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:54.093 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:53 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/2974927088' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T07:01:54.108 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T07:01:54.460 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:01:54.902 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T07:01:54.902 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 
07:01:54 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1655861695' entity='client.admin' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: mgrmap e12: a(active, since 2s) 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:55.030 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.030 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.030 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/4243331003' entity='client.admin' 2026-03-10T07:01:55.030 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:55.030 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:54 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:55.134 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:01:55.592 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm08 2026-03-10T07:01:55.593 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T07:01:55.593 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.conf 2026-03-10T07:01:55.608 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T07:01:55.608 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:01:55.662 INFO:tasks.cephadm:Adding host vm08 to orchestrator... 
2026-03-10T07:01:55.662 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph orch host add vm08 2026-03-10T07:01:55.933 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 
ceph-mon[50695]: Updating vm01:/etc/ceph/ceph.conf 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: Updating vm01:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.conf 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: Updating vm01:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:56.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:56 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:57.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:57 vm01 ceph-mon[50695]: Updating vm01:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.client.admin.keyring 2026-03-10T07:01:57.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:57 vm01 ceph-mon[50695]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:01:57.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:57 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:57.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:57 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:57.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:57 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:01:58.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:58 vm01 ceph-mon[50695]: Deploying cephadm binary to vm08 2026-03-10T07:01:58.861 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:58 vm01 ceph-mon[50695]: mgrmap e13: a(active, since 6s) 2026-03-10T07:01:58.862 INFO:teuthology.orchestra.run.vm01.stdout:Added host 'vm08' with addr 
'192.168.123.108' 2026-03-10T07:01:58.915 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph orch host ls --format=json 2026-03-10T07:01:59.108 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:01:59.362 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:01:59.362 INFO:teuthology.orchestra.run.vm01.stdout:[{"addr": "192.168.123.101", "hostname": "vm01", "labels": [], "status": ""}, {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}] 2026-03-10T07:01:59.419 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T07:01:59.419 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd crush tunables default 2026-03-10T07:01:59.610 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: Added host vm08 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 
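The `ceph orch host ls --format=json` output above is machine-readable, which is how the test harness verifies the `orch host add` took effect. A minimal sketch of that check (the JSON payload is copied from the log; the `host_present` helper is hypothetical, not teuthology code):

```python
import json

# JSON payload as printed by `ceph orch host ls --format=json` in the log above
host_ls = (
    '[{"addr": "192.168.123.101", "hostname": "vm01", "labels": [], "status": ""},'
    ' {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}]'
)

def host_present(payload: str, hostname: str) -> bool:
    """Return True if `hostname` appears in the orch host listing."""
    return any(h["hostname"] == hostname for h in json.loads(payload))

print(host_present(host_ls, "vm08"))  # True once the host add has taken effect
```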
2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.agent.vm08", "caps": []}]: dispatch 2026-03-10T07:02:00.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:01:59 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "client.agent.vm08", "caps": []}]': finished 2026-03-10T07:02:00.867 INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default 2026-03-10T07:02:00.918 
INFO:tasks.cephadm:Adding mon.a on vm01 2026-03-10T07:02:00.919 INFO:tasks.cephadm:Adding mon.b on vm08 2026-03-10T07:02:00.919 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph orch apply mon '2;vm01:192.168.123.101=a;vm08:192.168.123.108=b' 2026-03-10T07:02:01.146 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.conf 2026-03-10T07:02:01.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: Updating vm08:/etc/ceph/ceph.conf 2026-03-10T07:02:01.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:02:01.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: Updating vm08:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.conf 2026-03-10T07:02:01.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:02:01.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: Updating vm08:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.client.admin.keyring 2026-03-10T07:02:01.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/908206033' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T07:02:01.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 
192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:01.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:00 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:01.458 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mon update... 2026-03-10T07:02:01.546 DEBUG:teuthology.orchestra.run.vm08:mon.b> sudo journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.b.service 2026-03-10T07:02:01.547 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 2026-03-10T07:02:01.547 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph mon dump -f json 2026-03-10T07:02:01.895 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.conf 2026-03-10T07:02:02.217 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:02:02.217 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"c470e80c-1c4e-11f1-89aa-7f5873752d90","modified":"2026-03-10T07:01:26.440497Z","created":"2026-03-10T07:01:26.440497Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T07:02:02.218 
INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: Deploying daemon agent.vm08 on vm08 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/908206033' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: 
from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:02.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:01 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:03.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:02 vm01 ceph-mon[50695]: from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "2;vm01:192.168.123.101=a;vm08:192.168.123.108=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:02:03.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:02 vm01 ceph-mon[50695]: Saving service mon spec with placement vm01:192.168.123.101=a;vm08:192.168.123.108=b;count:2 2026-03-10T07:02:03.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:02 vm01 ceph-mon[50695]: Deploying daemon mon.b on vm08 2026-03-10T07:02:03.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:02 vm01 ceph-mon[50695]: from='client.? 192.168.123.108:0/2474789389' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:02:03.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:02 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:03.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:02 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:03.281 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T07:02:03.281 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph mon dump -f json 2026-03-10T07:02:03.571 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.b/config 2026-03-10T07:02:04.012 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:02:04.012 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":1,"fsid":"c470e80c-1c4e-11f1-89aa-7f5873752d90","modified":"2026-03-10T07:01:26.440497Z","created":"2026-03-10T07:01:26.440497Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T07:02:04.012 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 1 2026-03-10T07:02:04.516 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:04 vm08 ceph-mon[51337]: mon.b@-1(synchronizing).paxosservice(auth 1..5) refresh upgraded, format 0 -> 3 2026-03-10T07:02:04.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:04 vm01 ceph-mon[50695]: from='client.? 192.168.123.108:0/3481168801' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T07:02:05.096 INFO:tasks.cephadm:Waiting for 2 mons in monmap... 
2026-03-10T07:02:05.096 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph mon dump -f json 2026-03-10T07:02:05.362 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.b/config 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: mon.a calling monitor election 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: mon.b calling monitor election 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": 
"b"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: monmap epoch 2 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: last_changed 2026-03-10T07:02:04.268434+0000 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: created 2026-03-10T07:01:26.440497+0000 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: min_mon_release 19 (squid) 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: election_strategy: 1 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: fsmap 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: mgrmap e13: a(active, since 17s) 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: overall 
HEALTH_OK 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:09.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:09 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:10.028 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: mon.a calling monitor election 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: mon.b calling monitor election 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: mon.a is new leader, mons a,b in quorum (ranks 0,1) 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: monmap epoch 2 2026-03-10T07:02:10.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: last_changed 2026-03-10T07:02:04.268434+0000 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: created 
2026-03-10T07:01:26.440497+0000 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: min_mon_release 19 (squid) 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: election_strategy: 1 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: 0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.b 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: fsmap 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: osdmap e4: 0 total, 0 up, 0 in 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: mgrmap e13: a(active, since 17s) 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: overall HEALTH_OK 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T07:02:10.029 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:10.029 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:09 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.123 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:02:10.124 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":2,"fsid":"c470e80c-1c4e-11f1-89aa-7f5873752d90","modified":"2026-03-10T07:02:04.268434Z","created":"2026-03-10T07:01:26.440497Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:3300","nonce":0},{"type":"v1","addr":"192.168.123.101:6789","nonce":0}]},"addr":"192.168.123.101:6789/0","public_addr":"192.168.123.101:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-10T07:02:10.124 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 2 2026-03-10T07:02:10.179 INFO:tasks.cephadm:Generating final ceph.conf file... 
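The repeated "Waiting for 2 mons in monmap..." messages above correspond to a poll over `ceph mon dump -f json`: the task re-runs the dump until the monmap lists the expected number of mons (epoch 1 with one mon, then epoch 2 with two and quorum [0,1]). A minimal sketch of such a loop, assuming `run_mon_dump` is a hypothetical stand-in for the `cephadm shell -- ceph mon dump -f json` invocation seen in the log:

```python
import json
import time

def wait_for_mons(run_mon_dump, want: int,
                  interval: float = 2.0, timeout: float = 300.0) -> dict:
    """Poll a monmap dump until it lists at least `want` mons.

    `run_mon_dump` is any callable returning the JSON text of
    `ceph mon dump -f json`; returns the parsed monmap on success.
    """
    deadline = time.monotonic() + timeout
    while True:
        monmap = json.loads(run_mon_dump())
        mons = monmap.get("mons", [])
        if len(mons) >= want:
            return monmap
        if time.monotonic() >= deadline:
            raise TimeoutError(f"monmap has {len(mons)} mons, wanted {want}")
        time.sleep(interval)
```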
2026-03-10T07:02:10.180 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph config generate-minimal-conf 2026-03-10T07:02:10.406 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:10.641 INFO:teuthology.orchestra.run.vm01.stdout:# minimal ceph.conf for c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:02:10.642 INFO:teuthology.orchestra.run.vm01.stdout:[global] 2026-03-10T07:02:10.642 INFO:teuthology.orchestra.run.vm01.stdout: fsid = c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:02:10.642 INFO:teuthology.orchestra.run.vm01.stdout: mon_host = [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] 2026-03-10T07:02:10.694 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-10T07:02:10.694 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T07:02:10.694 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T07:02:10.718 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T07:02:10.718 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: Metadata not up to date on all hosts. Skipping non agent specs 2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: Reconfiguring mon.a (unknown last config time)... 
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: Reconfiguring daemon mon.a on vm01 2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: Reconfiguring mon.b (monmap changed)... 
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: Reconfiguring daemon mon.b on vm08
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='client.? 192.168.123.108:0/1584279155' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:10 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.781 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T07:02:10.781 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T07:02:10.807 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T07:02:10.807 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T07:02:10.869 INFO:tasks.cephadm:Adding mgr.a on vm01
2026-03-10T07:02:10.869 INFO:tasks.cephadm:Adding mgr.b on vm08
2026-03-10T07:02:10.869 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph orch apply mgr '2;vm01=a;vm08=b'
2026-03-10T07:02:10.929 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: Metadata not up to date on all hosts. Skipping non agent specs
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: Reconfiguring mon.a (unknown last config time)...
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: Reconfiguring daemon mon.a on vm01
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: Reconfiguring mon.b (monmap changed)...
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: Reconfiguring daemon mon.b on vm08
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: Updating vm01:/etc/ceph/ceph.conf
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: Updating vm08:/etc/ceph/ceph.conf
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='client.? 192.168.123.108:0/1584279155' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:10.930 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:10 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:11.071 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.b/config
2026-03-10T07:02:11.263 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a[50905]: 2026-03-10T07:02:11.262+0000 7fa000f81640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-10T07:02:11.312 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mgr update...
2026-03-10T07:02:11.368 DEBUG:teuthology.orchestra.run.vm08:mgr.b> sudo journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.b.service
2026-03-10T07:02:11.410 INFO:tasks.cephadm:Deploying OSDs...
2026-03-10T07:02:11.410 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-10T07:02:11.410 DEBUG:teuthology.orchestra.run.vm01:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T07:02:11.424 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:02:11.425 DEBUG:teuthology.orchestra.run.vm01:> ls /dev/[sv]d?
2026-03-10T07:02:11.479 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vda
2026-03-10T07:02:11.480 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdb
2026-03-10T07:02:11.480 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdc
2026-03-10T07:02:11.480 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vdd
2026-03-10T07:02:11.480 INFO:teuthology.orchestra.run.vm01.stdout:/dev/vde
2026-03-10T07:02:11.480 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T07:02:11.480 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T07:02:11.480 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdb
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdb
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 07:01:56.271305381 +0000
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 06:59:02.213414835 +0000
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 06:59:02.213414835 +0000
2026-03-10T07:02:11.540 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 06:56:23.228000000 +0000
2026-03-10T07:02:11.540 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T07:02:11.608 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T07:02:11.608 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T07:02:11.608 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000163105 s, 3.1 MB/s
2026-03-10T07:02:11.610 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T07:02:11.671 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdc
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdc
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 07:01:56.282305367 +0000
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 06:59:02.156414785 +0000
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 06:59:02.156414785 +0000
2026-03-10T07:02:11.729 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 06:56:23.235000000 +0000
2026-03-10T07:02:11.730 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T07:02:11.792 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T07:02:11.792 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T07:02:11.792 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000123551 s, 4.1 MB/s
2026-03-10T07:02:11.793 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T07:02:11.850 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vdd
2026-03-10T07:02:11.872 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: Updating vm01:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.conf
2026-03-10T07:02:11.872 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: Updating vm08:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.conf
2026-03-10T07:02:11.872 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/1933903449' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:02:11.873 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:11 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:11.905 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vdd
2026-03-10T07:02:11.905 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T07:02:11.905 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T07:02:11.905 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:02:11.905 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T07:02:11.905 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 07:01:56.289305358 +0000
2026-03-10T07:02:11.905 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 06:59:02.200414824 +0000
2026-03-10T07:02:11.906 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 06:59:02.200414824 +0000
2026-03-10T07:02:11.906 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 06:56:23.238000000 +0000
2026-03-10T07:02:11.906 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T07:02:11.967 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: Updating vm01:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.conf
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: Updating vm08:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/config/ceph.conf
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1933903449' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:02:11.968 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:11 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:11.970 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T07:02:11.970 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T07:02:11.970 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000154418 s, 3.3 MB/s
2026-03-10T07:02:11.971 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T07:02:12.030 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vde
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vde
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-10 07:01:56.301305342 +0000
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-10 06:59:02.163414791 +0000
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-10 06:59:02.163414791 +0000
2026-03-10T07:02:12.089 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-10 06:56:23.244000000 +0000
2026-03-10T07:02:12.089 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T07:02:12.167 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-10T07:02:12.167 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-10T07:02:12.167 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 8.6272e-05 s, 5.9 MB/s
2026-03-10T07:02:12.167 DEBUG:teuthology.orchestra.run.vm01:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T07:02:12.213 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:12 vm08 systemd[1]: Started Ceph mgr.b for c470e80c-1c4e-11f1-89aa-7f5873752d90.
2026-03-10T07:02:12.242 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T07:02:12.242 DEBUG:teuthology.orchestra.run.vm08:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T07:02:12.271 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T07:02:12.271 DEBUG:teuthology.orchestra.run.vm08:> ls /dev/[sv]d?
2026-03-10T07:02:12.334 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vda
2026-03-10T07:02:12.334 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdb
2026-03-10T07:02:12.334 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdc
2026-03-10T07:02:12.334 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdd
2026-03-10T07:02:12.334 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vde
2026-03-10T07:02:12.335 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T07:02:12.335 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T07:02:12.335 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdb
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdb
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 254 Links: 1 Device type: fc,10
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 07:02:05.793719018 +0000
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 06:59:02.198418964 +0000
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 06:59:02.198418964 +0000
2026-03-10T07:02:12.394 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 06:56:48.216000000 +0000
2026-03-10T07:02:12.395 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T07:02:12.465 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:12.211+0000 7f7905af1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T07:02:12.465 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:12.265+0000 7f7905af1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T07:02:12.468 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T07:02:12.468 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T07:02:12.468 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000131346 s, 3.9 MB/s
2026-03-10T07:02:12.469 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T07:02:12.534 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdc
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdc
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 255 Links: 1 Device type: fc,20
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 07:02:05.797719020 +0000
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 06:59:02.207418970 +0000
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 06:59:02.207418970 +0000
2026-03-10T07:02:12.596 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 06:56:48.219000000 +0000
2026-03-10T07:02:12.597 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T07:02:12.677 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T07:02:12.678 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T07:02:12.678 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000188392 s, 2.7 MB/s
2026-03-10T07:02:12.679 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T07:02:12.746 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdd
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=a;vm08=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: Saving service mgr spec with placement vm01=a;vm08=b;count:2
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: Deploying daemon mgr.b on vm08
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.761 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:02:12.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:02:12.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:12.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:02:12.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:12.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:02:12.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='client.14196 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm01=a;vm08=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: Saving service mgr spec with placement vm01=a;vm08=b;count:2
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: Deploying daemon mgr.b on vm08
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T07:02:12.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:12 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a'
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdd
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 256 Links: 1 Device type: fc,30
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 07:02:05.801719022 +0000
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 06:59:02.223418980 +0000
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 06:59:02.223418980 +0000
2026-03-10T07:02:12.790 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 06:56:48.223000000 +0000
2026-03-10T07:02:12.791 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T07:02:12.875 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T07:02:12.875 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-10T07:02:12.875 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.00301565 s, 170 kB/s 2026-03-10T07:02:12.876 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T07:02:12.911 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vde 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vde 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 512 block special file 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout:Device: 6h/6d Inode: 257 Links: 1 Device type: fc,40 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout:Context: system_u:object_r:fixed_disk_device_t:s0 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 07:02:05.805719025 +0000 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 06:59:02.205418969 +0000 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 06:59:02.205418969 +0000 2026-03-10T07:02:12.982 INFO:teuthology.orchestra.run.vm08.stdout: Birth: 2026-03-10 06:56:48.291000000 +0000 2026-03-10T07:02:12.982 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T07:02:13.059 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:12 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:12.812+0000 7f7905af1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T07:02:13.063 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-10T07:02:13.063 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-10T07:02:13.063 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000134832 s, 3.8 MB/s 
2026-03-10T07:02:13.064 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T07:02:13.139 INFO:tasks.cephadm:Deploying osd.0 on vm01 with /dev/vde... 2026-03-10T07:02:13.139 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- lvm zap /dev/vde 2026-03-10T07:02:13.301 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:13.642 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:13.328+0000 7f7905af1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T07:02:13.642 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T07:02:13.642 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T07:02:13.642 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: from numpy import show_config as show_numpy_config 2026-03-10T07:02:13.642 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:13.469+0000 7f7905af1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T07:02:13.642 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:13.548+0000 7f7905af1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T07:02:13.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-mon[51337]: Metadata not up to date on all hosts. Skipping non agent specs 2026-03-10T07:02:13.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-mon[51337]: Reconfiguring mgr.a (unknown last config time)... 2026-03-10T07:02:13.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-mon[51337]: Reconfiguring daemon mgr.a on vm01 2026-03-10T07:02:13.644 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-mon[51337]: Metadata not up to date on all hosts. Skipping non agent specs 2026-03-10T07:02:13.686 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:13 vm01 ceph-mon[50695]: Metadata not up to date on all hosts. Skipping non agent specs 2026-03-10T07:02:13.686 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:13 vm01 ceph-mon[50695]: Reconfiguring mgr.a (unknown last config time)... 2026-03-10T07:02:13.686 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:13 vm01 ceph-mon[50695]: Reconfiguring daemon mgr.a on vm01 2026-03-10T07:02:13.686 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:13 vm01 ceph-mon[50695]: Metadata not up to date on all hosts. 
Skipping non agent specs 2026-03-10T07:02:13.859 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:13.874 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph orch daemon add osd vm01:/dev/vde 2026-03-10T07:02:14.011 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:13 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:13.665+0000 7f7905af1140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T07:02:14.049 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:14.511 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.246+0000 7f7905af1140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T07:02:14.512 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.365+0000 7f7905af1140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:02:14.512 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.408+0000 7f7905af1140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T07:02:14.512 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.446+0000 7f7905af1140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T07:02:14.512 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.490+0000 7f7905af1140 -1 mgr[py] Module pg_autoscaler has 
missing NOTIFY_TYPES member 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 
2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:02:14.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:02:14.780 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:14.780 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:14 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.528+0000 7f7905af1140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.704+0000 7f7905af1140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.756+0000 7f7905af1140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 
ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", 
"entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:14.990 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:15.261 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:14 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:14.989+0000 7f7905af1140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T07:02:15.575 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.276+0000 7f7905af1140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T07:02:15.576 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.314+0000 7f7905af1140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T07:02:15.576 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.358+0000 7f7905af1140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T07:02:15.576 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.439+0000 7f7905af1140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T07:02:15.576 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.481+0000 7f7905af1140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T07:02:15.709 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: from='client.14204 -' 
entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:02:15.709 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/403923871' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7023fb55-8550-4ce2-aaa9-049fb53fef83"}]: dispatch 2026-03-10T07:02:15.709 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/403923871' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7023fb55-8550-4ce2-aaa9-049fb53fef83"}]': finished 2026-03-10T07:02:15.709 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T07:02:15.709 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:15.709 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:15.709 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.574+0000 7f7905af1140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.693+0000 7f7905af1140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: from='client.14204 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", 
"svc_arg": "vm01:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/403923871' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7023fb55-8550-4ce2-aaa9-049fb53fef83"}]: dispatch 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/403923871' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7023fb55-8550-4ce2-aaa9-049fb53fef83"}]': finished 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: osdmap e5: 1 total, 0 up, 1 in 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:15.832 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/2669908944' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:02:16.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:16.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:15 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/2669908944' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:02:16.262 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.831+0000 7f7905af1140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T07:02:16.262 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:15 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-b[52626]: 2026-03-10T07:02:15.868+0000 7f7905af1140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T07:02:17.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:16 vm08 ceph-mon[51337]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:17.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:16 vm08 ceph-mon[51337]: Standby manager daemon b started 2026-03-10T07:02:17.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:16 vm08 ceph-mon[51337]: from='mgr.? 192.168.123.108:0/100582751' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T07:02:17.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:16 vm08 ceph-mon[51337]: from='mgr.? 192.168.123.108:0/100582751' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T07:02:17.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:16 vm08 ceph-mon[51337]: from='mgr.? 192.168.123.108:0/100582751' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T07:02:17.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:16 vm08 ceph-mon[51337]: from='mgr.? 
192.168.123.108:0/100582751' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T07:02:17.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:16 vm01 ceph-mon[50695]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:17.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:16 vm01 ceph-mon[50695]: Standby manager daemon b started 2026-03-10T07:02:17.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:16 vm01 ceph-mon[50695]: from='mgr.? 192.168.123.108:0/100582751' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T07:02:17.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:16 vm01 ceph-mon[50695]: from='mgr.? 192.168.123.108:0/100582751' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T07:02:17.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:16 vm01 ceph-mon[50695]: from='mgr.? 192.168.123.108:0/100582751' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T07:02:17.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:16 vm01 ceph-mon[50695]: from='mgr.? 
192.168.123.108:0/100582751' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T07:02:18.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:17 vm08 ceph-mon[51337]: mgrmap e14: a(active, since 25s), standbys: b 2026-03-10T07:02:18.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:17 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T07:02:18.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:17 vm01 ceph-mon[50695]: mgrmap e14: a(active, since 25s), standbys: b 2026-03-10T07:02:18.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:17 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T07:02:18.996 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:18 vm01 ceph-mon[50695]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:19.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:18 vm08 ceph-mon[51337]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:19.739 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:19 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T07:02:19.740 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:19 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:20.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:19 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T07:02:20.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:19 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' 
entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:20.825 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:20 vm01 ceph-mon[50695]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:20.826 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:20 vm01 ceph-mon[50695]: Deploying daemon osd.0 on vm01 2026-03-10T07:02:21.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:20 vm08 ceph-mon[51337]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:21.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:20 vm08 ceph-mon[51337]: Deploying daemon osd.0 on vm01 2026-03-10T07:02:22.854 INFO:teuthology.orchestra.run.vm01.stdout:Created osd(s) 0 on host 'vm01' 2026-03-10T07:02:22.942 DEBUG:teuthology.orchestra.run.vm01:osd.0> sudo journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@osd.0.service 2026-03-10T07:02:22.943 INFO:tasks.cephadm:Deploying osd.1 on vm08 with /dev/vde... 2026-03-10T07:02:22.943 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- lvm zap /dev/vde 2026-03-10T07:02:23.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:22 vm08 ceph-mon[51337]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:23.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:22 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:23.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:22 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:23.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:22 vm08 ceph-mon[51337]: 
from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:23.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:22 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:22 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:22 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:22 vm01 ceph-mon[50695]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:23.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:22 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:23.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:22 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:23.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:22 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:23.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:22 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:22 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:22 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.122 
INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.b/config 2026-03-10T07:02:23.529 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:23 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0[58371]: 2026-03-10T07:02:23.211+0000 7f7785ec8740 -1 osd.0 0 log_to_monitors true 2026-03-10T07:02:23.851 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:23 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:23.851 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:23 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:02:23.851 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:23 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.851 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:23 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:23.851 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:23 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:23.851 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:23 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.851 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:23 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:23.851 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:23 vm08 ceph-mon[51337]: from='osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", 
"class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T07:02:23.942 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:02:23.965 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph orch daemon add osd vm08:/dev/vde 2026-03-10T07:02:24.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:23 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:24.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:23 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm01", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:02:24.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:23 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:24.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:23 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:24.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:23 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:24.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:23 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:24.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:23 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:24.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:23 vm01 ceph-mon[50695]: from='osd.0 
[v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T07:02:24.145 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.b/config 2026-03-10T07:02:25.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:24 vm08 ceph-mon[51337]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:25.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:24 vm08 ceph-mon[51337]: from='osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T07:02:25.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:24 vm08 ceph-mon[51337]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T07:02:25.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:24 vm08 ceph-mon[51337]: from='osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T07:02:25.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:24 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:25.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:24 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:02:25.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:24 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:02:25.229 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:24 vm08 
ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:25.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:24 vm01 ceph-mon[50695]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:25.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:24 vm01 ceph-mon[50695]: from='osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T07:02:25.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:24 vm01 ceph-mon[50695]: osdmap e6: 1 total, 0 up, 1 in 2026-03-10T07:02:25.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:24 vm01 ceph-mon[50695]: from='osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]: dispatch 2026-03-10T07:02:25.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:24 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:25.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:24 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T07:02:25.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:24 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T07:02:25.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:24 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:25.278 
INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:24 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0[58371]: 2026-03-10T07:02:24.869+0000 7f778265c640 -1 osd.0 0 waiting for initial osdmap 2026-03-10T07:02:25.278 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:24 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0[58371]: 2026-03-10T07:02:24.875+0000 7f777dc73640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T07:02:26.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='client.24111 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='client.? 192.168.123.108:0/1956669552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38607e04-3d56-461c-86d0-672e4169aac4"}]: dispatch 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38607e04-3d56-461c-86d0-672e4169aac4"}]: dispatch 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "38607e04-3d56-461c-86d0-672e4169aac4"}]': finished 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068] boot 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: osdmap e8: 2 total, 1 up, 2 in 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:26.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:25 vm08 ceph-mon[51337]: from='client.? 
192.168.123.108:0/858070045' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:02:26.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='client.24111 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T07:02:26.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm01", "root=default"]}]': finished 2026-03-10T07:02:26.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: osdmap e7: 1 total, 0 up, 1 in 2026-03-10T07:02:26.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:26.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:26.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='client.? 192.168.123.108:0/1956669552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38607e04-3d56-461c-86d0-672e4169aac4"}]: dispatch 2026-03-10T07:02:26.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "38607e04-3d56-461c-86d0-672e4169aac4"}]: dispatch 2026-03-10T07:02:26.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "38607e04-3d56-461c-86d0-672e4169aac4"}]': finished 2026-03-10T07:02:26.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: osd.0 [v2:192.168.123.101:6802/3385215068,v1:192.168.123.101:6803/3385215068] boot 2026-03-10T07:02:26.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: osdmap e8: 2 total, 1 up, 2 in 2026-03-10T07:02:26.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T07:02:26.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:26.279 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:25 vm01 ceph-mon[50695]: from='client.? 192.168.123.108:0/858070045' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T07:02:27.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:26 vm08 ceph-mon[51337]: purged_snaps scrub starts 2026-03-10T07:02:27.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:26 vm08 ceph-mon[51337]: purged_snaps scrub ok 2026-03-10T07:02:27.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:26 vm08 ceph-mon[51337]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:27.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:26 vm08 ceph-mon[51337]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T07:02:27.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:26 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:27.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:26 vm01 ceph-mon[50695]: purged_snaps scrub starts 2026-03-10T07:02:27.278 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:26 vm01 ceph-mon[50695]: purged_snaps scrub ok 2026-03-10T07:02:27.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:26 vm01 ceph-mon[50695]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T07:02:27.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:26 vm01 ceph-mon[50695]: osdmap e9: 2 total, 1 up, 2 in 2026-03-10T07:02:27.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:26 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:29.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:28 vm08 ceph-mon[51337]: pgmap v17: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:02:29.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:28 vm01 ceph-mon[50695]: pgmap v17: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:02:29.886 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:29 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T07:02:29.886 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:29 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:30.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:29 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T07:02:30.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:29 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:30.978 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:30 vm08 ceph-mon[51337]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-10T07:02:30.979 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:30 vm08 ceph-mon[51337]: Deploying daemon osd.1 on vm08 2026-03-10T07:02:31.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:30 vm01 ceph-mon[50695]: pgmap v18: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:02:31.278 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:30 vm01 ceph-mon[50695]: Deploying daemon osd.1 on vm08 2026-03-10T07:02:33.354 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 1 on host 'vm08' 2026-03-10T07:02:33.417 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:33 vm08 ceph-mon[51337]: pgmap v19: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:02:33.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:33 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:33.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:33 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:33.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:33 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:33.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:33 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:33.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:33 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:33.418 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:33 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:33.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:33 vm01 ceph-mon[50695]: pgmap v19: 0 pgs: ; 0 B data, 26 MiB used, 20 
GiB / 20 GiB avail 2026-03-10T07:02:33.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:33 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:33.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:33 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:33.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:33 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:33.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:33 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:33.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:33 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:33.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:33 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:33.548 DEBUG:teuthology.orchestra.run.vm08:osd.1> sudo journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@osd.1.service 2026-03-10T07:02:33.550 INFO:tasks.cephadm:Waiting for 2 OSDs to come up... 
2026-03-10T07:02:33.550 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd stat -f json 2026-03-10T07:02:33.725 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:33.995 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:34.088 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":9,"num_osds":2,"num_up_osds":1,"osd_up_since":1773126145,"num_in_osds":2,"osd_in_since":1773126145,"num_remapped_pgs":0} 2026-03-10T07:02:34.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.262 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='osd.1 [v2:192.168.123.108:6800/2906530338,v1:192.168.123.108:6801/2906530338]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/3424818804' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: Detected new or changed devices on vm08 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:34.262 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:34.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:34 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:34.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T07:02:34.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:34.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:34.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.528 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: pgmap v20: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='osd.1 [v2:192.168.123.108:6800/2906530338,v1:192.168.123.108:6801/2906530338]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/3424818804' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: Detected new or changed devices on vm08 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:34.529 
INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:34.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:34 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.089 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd stat -f json 2026-03-10T07:02:35.269 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:35.393 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.393 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='osd.1 [v2:192.168.123.108:6800/2906530338,v1:192.168.123.108:6801/2906530338]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:35.394 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:35 vm01 ceph-mon[50695]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T07:02:35.539 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:35.639 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":11,"num_osds":2,"num_up_osds":1,"osd_up_since":1773126145,"num_in_osds":2,"osd_in_since":1773126145,"num_remapped_pgs":0} 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: 
from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='osd.1 [v2:192.168.123.108:6800/2906530338,v1:192.168.123.108:6801/2906530338]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: osdmap e10: 2 total, 1 up, 2 in 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:35.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:35 vm08 ceph-mon[51337]: from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": 
["host=vm08", "root=default"]}]: dispatch 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/2845849618' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: Detected new or changed devices on vm01 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:36.528 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: 
from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.529 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:36 vm01 ceph-mon[50695]: from='osd.1 ' entity='osd.1' 2026-03-10T07:02:36.639 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd stat -f json 2026-03-10T07:02:36.762 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:02:36 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1[55475]: 2026-03-10T07:02:36.282+0000 7f7536680640 -1 osd.1 0 waiting for initial osdmap 2026-03-10T07:02:36.762 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:02:36 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1[55475]: 2026-03-10T07:02:36.286+0000 7f7531ca9640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd 
crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: osdmap e11: 2 total, 1 up, 2 in 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: pgmap v23: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/2845849618' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: Detected new or changed devices on vm01 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:36.762 
INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' 2026-03-10T07:02:36.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:36 vm08 ceph-mon[51337]: from='osd.1 ' entity='osd.1' 2026-03-10T07:02:36.802 
INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:37.153 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:37.347 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":11,"num_osds":2,"num_up_osds":1,"osd_up_since":1773126145,"num_in_osds":2,"osd_in_since":1773126145,"num_remapped_pgs":0} 2026-03-10T07:02:37.472 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:37 vm01 ceph-mon[50695]: purged_snaps scrub starts 2026-03-10T07:02:37.472 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:37 vm01 ceph-mon[50695]: purged_snaps scrub ok 2026-03-10T07:02:37.472 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:37 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:37.472 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:37 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1848064941' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:02:37.472 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:37 vm01 ceph-mon[50695]: osd.1 [v2:192.168.123.108:6800/2906530338,v1:192.168.123.108:6801/2906530338] boot 2026-03-10T07:02:37.472 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:37 vm01 ceph-mon[50695]: osdmap e12: 2 total, 2 up, 2 in 2026-03-10T07:02:37.472 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:37 vm01 ceph-mon[50695]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:37.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:37 vm08 ceph-mon[51337]: purged_snaps scrub starts 2026-03-10T07:02:37.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:37 vm08 ceph-mon[51337]: purged_snaps scrub ok 2026-03-10T07:02:37.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:37 vm08 ceph-mon[51337]: from='mgr.14150 
192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:37.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:37 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/1848064941' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:02:37.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:37 vm08 ceph-mon[51337]: osd.1 [v2:192.168.123.108:6800/2906530338,v1:192.168.123.108:6801/2906530338] boot 2026-03-10T07:02:37.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:37 vm08 ceph-mon[51337]: osdmap e12: 2 total, 2 up, 2 in 2026-03-10T07:02:37.762 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:37 vm08 ceph-mon[51337]: from='mgr.14150 192.168.123.101:0/1682267125' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T07:02:38.348 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd stat -f json 2026-03-10T07:02:38.551 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:38.631 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:38 vm01 ceph-mon[50695]: pgmap v25: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:38.911 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:39.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:38 vm08 ceph-mon[51337]: pgmap v25: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:39.480 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1773126157,"num_in_osds":2,"osd_in_since":1773126145,"num_remapped_pgs":0} 2026-03-10T07:02:39.481 DEBUG:teuthology.orchestra.run.vm01:> sudo 
/home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd dump --format=json 2026-03-10T07:02:39.686 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:39.744 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:39 vm01 ceph-mon[50695]: osdmap e13: 2 total, 2 up, 2 in 2026-03-10T07:02:39.744 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:39 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/2065707351' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:02:39.930 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:39.930 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"fsid":"c470e80c-1c4e-11f1-89aa-7f5873752d90","created":"2026-03-10T07:01:28.543694+0000","modified":"2026-03-10T07:02:38.425578+0000","last_up_change":"2026-03-10T07:02:37.287822+0000","last_in_change":"2026-03-10T07:02:25.177426+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"7023fb55-8550-4ce2-aaa9-049fb53fef83","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6803","nonce":3385215068}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3385215068},{"ty
pe":"v1","addr":"192.168.123.101:6805","nonce":3385215068}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6809","nonce":3385215068}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6807","nonce":3385215068}]},"public_addr":"192.168.123.101:6803/3385215068","cluster_addr":"192.168.123.101:6805/3385215068","heartbeat_back_addr":"192.168.123.101:6809/3385215068","heartbeat_front_addr":"192.168.123.101:6807/3385215068","state":["exists","up"]},{"osd":1,"uuid":"38607e04-3d56-461c-86d0-672e4169aac4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6801","nonce":2906530338}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6803","nonce":2906530338}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6807","nonce":2906530338}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6805","nonce":2906530338}]},"public_addr":"192.168.123.108:6801/2906530338","cluster_addr":"192.168.123.108:6803/2906530338","heartbeat_back_addr":"192.168.123.108:6807/2906530338","heartbeat_front_addr":"192.168.123.108:6805/2906530338","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:02:24.255811+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271
,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:02:34.500685+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/3274356286":"2026-03-11T07:01:51.477842+0000","192.168.123.101:0/645123104":"2026-03-11T07:01:51.477842+0000","192.168.123.101:6801/92363254":"2026-03-11T07:01:40.627448+0000","192.168.123.101:6800/92363254":"2026-03-11T07:01:40.627448+0000","192.168.123.101:0/2823543715":"2026-03-11T07:01:40.627448+0000","192.168.123.101:6801/2082331818":"2026-03-11T07:01:51.477842+0000","192.168.123.101:6800/2082331818":"2026-03-11T07:01:51.477842+0000","192.168.123.101:0/356405364":"2026-03-11T07:01:40.627448+0000","192.168.123.101:0/1080866435":"2026-03-11T07:01:51.477842+0000","192.168.123.101:0/2772545335":"2026-03-11T07:01:40.627448+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T07:02:40.005 INFO:tasks.cephadm.ceph_manager.ceph:[] 2026-03-10T07:02:40.006 INFO:tasks.cephadm:Setting up client nodes... 
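The harness above repeatedly runs `cephadm shell ... ceph osd stat -f json` and waits until every OSD reports up (1/2 at osdmap epoch 11, then 2/2 at epoch 13) before moving on to client setup. A minimal sketch of that polling loop, assuming a hypothetical `wait_for_osds_up` helper (not teuthology's actual API) fed by any callable that returns the `ceph osd stat -f json` output:

```python
import json
import time


def wait_for_osds_up(get_osd_stat, timeout=300, interval=2):
    """Poll `ceph osd stat -f json` output until all OSDs are up.

    `get_osd_stat` is a callable returning the JSON text that
    `ceph osd stat -f json` prints (e.g. captured from a
    `cephadm shell` invocation). Returns the final parsed stat
    dict, or raises TimeoutError if the cluster never converges.
    """
    deadline = time.monotonic() + timeout
    while True:
        stat = json.loads(get_osd_stat())
        # All OSDs registered in the osdmap are also up: done.
        if stat["num_up_osds"] == stat["num_osds"]:
            return stat
        if time.monotonic() >= deadline:
            raise TimeoutError(
                "only %d/%d OSDs up" % (stat["num_up_osds"], stat["num_osds"]))
        time.sleep(interval)
```

Fed the two stat snapshots visible in the log (epoch 11 with one OSD up, then epoch 13 with both up), the loop returns on the second poll; the real harness additionally re-invokes `cephadm shell` for each sample rather than reusing a connection.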
2026-03-10T07:02:40.006 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T07:02:40.011 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:39 vm08 ceph-mon[51337]: osdmap e13: 2 total, 2 up, 2 in 2026-03-10T07:02:40.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:39 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/2065707351' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T07:02:40.199 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:40.464 INFO:teuthology.orchestra.run.vm01.stdout:[client.0] 2026-03-10T07:02:40.465 INFO:teuthology.orchestra.run.vm01.stdout: key = AQAQwq9pH41UGxAAwAFxbHdTGIK18tu5OAITFw== 2026-03-10T07:02:40.535 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-10T07:02:40.536 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T07:02:40.536 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T07:02:40.572 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T07:02:40.772 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.b/config 2026-03-10T07:02:40.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:40 vm01 ceph-mon[50695]: pgmap v27: 0 pgs: 
; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:40.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:40 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/2052769179' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:02:40.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:40 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/3654661112' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:02:40.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:40 vm01 ceph-mon[50695]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:02:40.779 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:40 vm01 ceph-mon[50695]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:02:40.846 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:40 vm08 ceph-mon[51337]: pgmap v27: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:40.846 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:40 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/2052769179' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:02:40.846 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:40 vm08 ceph-mon[51337]: from='client.? 
192.168.123.101:0/3654661112' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:02:40.846 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:40 vm08 ceph-mon[51337]: from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:02:40.846 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:40 vm08 ceph-mon[51337]: from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:02:41.080 INFO:teuthology.orchestra.run.vm08.stdout:[client.1] 2026-03-10T07:02:41.080 INFO:teuthology.orchestra.run.vm08.stdout: key = AQARwq9pGy96BBAA31avw+85NRBouJ+9u5hzFw== 2026-03-10T07:02:41.156 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T07:02:41.156 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T07:02:41.156 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T07:02:41.207 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-10T07:02:41.215 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T07:02:41.215 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph mgr dump --format=json 2026-03-10T07:02:41.390 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:41.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:41 vm01 ceph-mon[50695]: from='client.? 
192.168.123.108:0/272850882' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:02:41.778 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:41 vm01 ceph-mon[50695]: from='client.? 192.168.123.108:0/272850882' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:02:41.897 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:42.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:41 vm08 ceph-mon[51337]: from='client.? 192.168.123.108:0/272850882' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-10T07:02:42.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:41 vm08 ceph-mon[51337]: from='client.? 
192.168.123.108:0/272850882' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-10T07:02:42.669 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":14,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":3056465443},{"type":"v1","addr":"192.168.123.101:6801","nonce":3056465443}]},"active_addr":"192.168.123.101:6801/3056465443","active_change":"2026-03-10T07:01:51.477953+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":14200,"name":"b","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"",
"max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this option can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.101:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":3357310368}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.1
68.123.101:0","nonce":2105551821}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1671052028}]}]} 2026-03-10T07:02:42.671 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T07:02:42.671 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T07:02:42.671 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd dump --format=json 2026-03-10T07:02:42.859 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:42.976 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:42 vm01 ceph-mon[50695]: pgmap v28: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:42.993 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:42 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/2359508112' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:02:43.012 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:42 vm08 ceph-mon[51337]: pgmap v28: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:43.040 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:42 vm08 ceph-mon[51337]: from='client.? 
192.168.123.101:0/2359508112' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T07:02:43.363 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:43.364 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"fsid":"c470e80c-1c4e-11f1-89aa-7f5873752d90","created":"2026-03-10T07:01:28.543694+0000","modified":"2026-03-10T07:02:38.425578+0000","last_up_change":"2026-03-10T07:02:37.287822+0000","last_in_change":"2026-03-10T07:02:25.177426+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"7023fb55-8550-4ce2-aaa9-049fb53fef83","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6803","nonce":3385215068}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6805","nonce":3385215068}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6809","nonce":3385215068}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6807","nonce":3385215068}]},"public_addr":"192.168.123.101:6803/3385215068","cluster_addr":"192.168.123.101:6805/3385215068","heartbeat_back_addr":"192.168.123.101:6809/3385215068","heartbeat_front_addr":"192.168.123.101:6
807/3385215068","state":["exists","up"]},{"osd":1,"uuid":"38607e04-3d56-461c-86d0-672e4169aac4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6801","nonce":2906530338}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6803","nonce":2906530338}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6807","nonce":2906530338}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6805","nonce":2906530338}]},"public_addr":"192.168.123.108:6801/2906530338","cluster_addr":"192.168.123.108:6803/2906530338","heartbeat_back_addr":"192.168.123.108:6807/2906530338","heartbeat_front_addr":"192.168.123.108:6805/2906530338","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:02:24.255811+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:02:34.500685+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/3274356286":"2026-03-11T07:01:51.477842+0000","192.168.123.101:0/645123104":"2026-03-11T07:01:51.477842+0000","192.168.123.101:6801/92363254":"2026-03-11T07:01:40.627448+0000","192.168.123.101:6800/92363254":"2026-03-11T07:01:40.627448+0000","192.168.123.101:0/2823543715":"2026-03-11T07:01:40.627448+0000","192.168.123.101:6801/2082331818":"2026-03-11T07:01:51.477842
+0000","192.168.123.101:6800/2082331818":"2026-03-11T07:01:51.477842+0000","192.168.123.101:0/356405364":"2026-03-11T07:01:40.627448+0000","192.168.123.101:0/1080866435":"2026-03-11T07:01:51.477842+0000","192.168.123.101:0/2772545335":"2026-03-11T07:01:40.627448+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T07:02:43.438 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T07:02:43.439 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd dump --format=json 2026-03-10T07:02:43.614 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:43.862 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:43.862 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":13,"fsid":"c470e80c-1c4e-11f1-89aa-7f5873752d90","created":"2026-03-10T07:01:28.543694+0000","modified":"2026-03-10T07:02:38.425578+0000","last_up_change":"2026-03-10T07:02:37.287822+0000","last_in_change":"2026-03-10T07:02:25.177426+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":6,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":2,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"7023fb55-8550-4ce2-aaa9-049fb53fef83","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6803","nonce":3385215068}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6805","nonce":3385215068}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6809","nonce":3385215068}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":3385215068},{"type":"v1","addr":"192.168.123.101:6807","nonce":3385215068}]},"public_addr":"192.168.123.101:6803/3385215068","cluster_addr":"192.168.123.101:6805/3385215068","heartbeat_back_addr":"192.168.123.101:6809/3385215068","heartbeat_front_addr":"192.168.123.101:6807/3385215068","state":["exists","up"]},{"osd":1,"uuid":"38607e04-3d56-461c-86d0-672e4169aac4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_t
hru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6801","nonce":2906530338}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6803","nonce":2906530338}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6807","nonce":2906530338}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":2906530338},{"type":"v1","addr":"192.168.123.108:6805","nonce":2906530338}]},"public_addr":"192.168.123.108:6801/2906530338","cluster_addr":"192.168.123.108:6803/2906530338","heartbeat_back_addr":"192.168.123.108:6807/2906530338","heartbeat_front_addr":"192.168.123.108:6805/2906530338","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:02:24.255811+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T07:02:34.500685+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.101:0/3274356286":"2026-03-11T07:01:51.477842+0000","192.168.123.101:0/645123104":"2026-03-11T07:01:51.477842+0000","192.168.123.101:6801/92363254":"2026-03-11T07:01:40.627448+0000","192.168.123.101:6800/92363254":"2026-03-11T07:01:40.627448+0000","192.168.123.101:0/2823543715":"2026-03-11T07:01:40.627448+0000","192.168.123.101:6801/2082331818":"2026-03-11T07:01:51.477842+0000","192.168.123.101:6800/2082331818":"2026-03-11T07:01:51.477842+0000","192.168.123.101:0/356405364":"2026-03-11T07:01:40.627448+0000","192.168.123.101:0/1080866435":"2026-03-11T07:01:51.477842+00
00","192.168.123.101:0/2772545335":"2026-03-11T07:01:40.627448+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T07:02:43.892 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:43 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1634232653' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:02:43.928 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph tell osd.0 flush_pg_stats 2026-03-10T07:02:43.928 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph tell osd.1 flush_pg_stats 2026-03-10T07:02:44.113 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:44.141 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:44.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:43 vm08 ceph-mon[51337]: from='client.? 
192.168.123.101:0/1634232653' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:02:44.338 INFO:teuthology.orchestra.run.vm01.stdout:34359738373 2026-03-10T07:02:44.338 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd last-stat-seq osd.0 2026-03-10T07:02:44.411 INFO:teuthology.orchestra.run.vm01.stdout:51539607555 2026-03-10T07:02:44.412 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd last-stat-seq osd.1 2026-03-10T07:02:44.572 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:44.683 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:44.853 INFO:teuthology.orchestra.run.vm01.stdout:34359738372 2026-03-10T07:02:44.907 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738373 got 34359738372 for osd.0 2026-03-10T07:02:44.953 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:44 vm01 ceph-mon[50695]: pgmap v29: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:44.953 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:44 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/1331477945' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:02:44.954 INFO:teuthology.orchestra.run.vm01.stdout:51539607554 2026-03-10T07:02:45.009 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607555 got 51539607554 for osd.1 2026-03-10T07:02:45.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:44 vm08 ceph-mon[51337]: pgmap v29: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:45.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:44 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/1331477945' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T07:02:45.907 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd last-stat-seq osd.0 2026-03-10T07:02:46.009 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph osd last-stat-seq osd.1 2026-03-10T07:02:46.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:45 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/569755091' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:02:46.028 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:45 vm01 ceph-mon[50695]: from='client.? 
192.168.123.101:0/1824033299' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:02:46.101 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:46.245 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:46.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:45 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/569755091' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:02:46.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:45 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/1824033299' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:02:46.401 INFO:teuthology.orchestra.run.vm01.stdout:34359738373 2026-03-10T07:02:46.480 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738373 got 34359738373 for osd.0 2026-03-10T07:02:46.480 DEBUG:teuthology.parallel:result is None 2026-03-10T07:02:46.527 INFO:teuthology.orchestra.run.vm01.stdout:51539607555 2026-03-10T07:02:46.581 INFO:tasks.cephadm.ceph_manager.ceph:need seq 51539607555 got 51539607555 for osd.1 2026-03-10T07:02:46.581 DEBUG:teuthology.parallel:result is None 2026-03-10T07:02:46.581 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T07:02:46.581 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph pg dump --format=json 2026-03-10T07:02:46.759 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:46.831 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:46 vm01 ceph-mon[50695]: pgmap v30: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 
2026-03-10T07:02:46.832 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:46 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1220942213' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:02:46.832 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:46 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/1940532643' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:02:46.989 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:46.989 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-10T07:02:47.060 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":30,"stamp":"2026-03-10T07:02:45.495319+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"
seq":0,"num_pgs":0,"num_osds":2,"num_per_pool_osds":2,"num_per_pool_omap_osds":0,"kb":41934848,"kb_used":463508,"kb_used_data":224,"kb_used_omap":3,"kb_used_meta":53628,"kb_avail":41471340,"statfs":{"total":42941284352,"available":42466652160,"internally_reserved":0,"allocated":229376,"data_stored":56464,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":3179,"internal_metadata":54915989},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":1,"up_from":12,"seq":51
539607555,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436552,"kb_used_data":112,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530872,"statfs":{"total":21470642176,"available":21023612928,"internally_reserved":0,"allocated":114688,"data_stored":28232,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738373,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26956,"kb_used_data":112,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940468,"statfs":{"total":21470642176,"available":21443039232,"internally_reserved":0,"allocated":114688,"data_stored":28232,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-10T07:02:47.061 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph pg dump --format=json 2026-03-10T07:02:47.250 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:47.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:46 vm08 ceph-mon[51337]: pgmap v30: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 
2026-03-10T07:02:47.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:46 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/1220942213' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T07:02:47.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:46 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/1940532643' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T07:02:47.492 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:47.492 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-10T07:02:47.565 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":30,"stamp":"2026-03-10T07:02:45.495319+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"
seq":0,"num_pgs":0,"num_osds":2,"num_per_pool_osds":2,"num_per_pool_omap_osds":0,"kb":41934848,"kb_used":463508,"kb_used_data":224,"kb_used_omap":3,"kb_used_meta":53628,"kb_avail":41471340,"statfs":{"total":42941284352,"available":42466652160,"internally_reserved":0,"allocated":229376,"data_stored":56464,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":3179,"internal_metadata":54915989},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[],"pool_stats":[],"osd_stats":[{"osd":1,"up_from":12,"seq":51
539607555,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":436552,"kb_used_data":112,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530872,"statfs":{"total":21470642176,"available":21023612928,"internally_reserved":0,"allocated":114688,"data_stored":28232,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738373,"num_pgs":0,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":26956,"kb_used_data":112,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20940468,"statfs":{"total":21470642176,"available":21443039232,"internally_reserved":0,"allocated":114688,"data_stored":28232,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[]}} 2026-03-10T07:02:47.565 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T07:02:47.565 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T07:02:47.565 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T07:02:47.565 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph health --format=json 2026-03-10T07:02:47.745 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:48.004 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:02:48.004 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T07:02:48.037 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:47 vm01 ceph-mon[50695]: from='client.14294 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:02:48.078 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T07:02:48.078 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T07:02:48.078 INFO:teuthology.run_tasks:Running task cephadm.shell... 
2026-03-10T07:02:48.080 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm01.local 2026-03-10T07:02:48.080 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- bash -c 'set -ex 2026-03-10T07:02:48.080 DEBUG:teuthology.orchestra.run.vm01:> HOSTNAMES=$(ceph orch host ls --format json | jq -r '"'"'.[] | .hostname'"'"') 2026-03-10T07:02:48.080 DEBUG:teuthology.orchestra.run.vm01:> for host in $HOSTNAMES; do 2026-03-10T07:02:48.080 DEBUG:teuthology.orchestra.run.vm01:> # do a check-host on each host to make sure it'"'"'s reachable 2026-03-10T07:02:48.081 DEBUG:teuthology.orchestra.run.vm01:> ceph cephadm check-host ${host} 2> ${host}-ok.txt 2026-03-10T07:02:48.081 DEBUG:teuthology.orchestra.run.vm01:> HOST_OK=$(cat ${host}-ok.txt) 2026-03-10T07:02:48.081 DEBUG:teuthology.orchestra.run.vm01:> if ! 
grep -q "Host looks OK" <<< "$HOST_OK"; then 2026-03-10T07:02:48.081 DEBUG:teuthology.orchestra.run.vm01:> printf "Failed host check:\n\n$HOST_OK" 2026-03-10T07:02:48.081 DEBUG:teuthology.orchestra.run.vm01:> exit 1 2026-03-10T07:02:48.081 DEBUG:teuthology.orchestra.run.vm01:> fi 2026-03-10T07:02:48.081 DEBUG:teuthology.orchestra.run.vm01:> done 2026-03-10T07:02:48.081 DEBUG:teuthology.orchestra.run.vm01:> ' 2026-03-10T07:02:48.261 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:47 vm08 ceph-mon[51337]: from='client.14294 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:02:48.262 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:48.333 INFO:teuthology.orchestra.run.vm01.stderr:++ ceph orch host ls --format json 2026-03-10T07:02:48.333 INFO:teuthology.orchestra.run.vm01.stderr:++ jq -r '.[] | .hostname' 2026-03-10T07:02:48.510 INFO:teuthology.orchestra.run.vm01.stderr:+ HOSTNAMES='vm01 2026-03-10T07:02:48.510 INFO:teuthology.orchestra.run.vm01.stderr:vm08' 2026-03-10T07:02:48.510 INFO:teuthology.orchestra.run.vm01.stderr:+ for host in $HOSTNAMES 2026-03-10T07:02:48.510 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph cephadm check-host vm01 2026-03-10T07:02:48.857 INFO:teuthology.orchestra.run.vm01.stdout:vm01 (None) ok 2026-03-10T07:02:48.869 INFO:teuthology.orchestra.run.vm01.stderr:++ cat vm01-ok.txt 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:+ HOST_OK='podman (/usr/bin/podman) version 5.8.0 is present 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:systemctl is present 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:lvcreate is present 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:Unit chronyd.service is enabled and running 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:Hostname "vm01" matches what is 
expected. 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:Host looks OK' 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:+ grep -q 'Host looks OK' 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:+ for host in $HOSTNAMES 2026-03-10T07:02:48.870 INFO:teuthology.orchestra.run.vm01.stderr:+ ceph cephadm check-host vm08 2026-03-10T07:02:49.234 INFO:teuthology.orchestra.run.vm01.stdout:vm08 (None) ok 2026-03-10T07:02:49.234 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:48 vm01 ceph-mon[50695]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:02:49.234 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:48 vm01 ceph-mon[50695]: pgmap v31: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:49.234 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:48 vm01 ceph-mon[50695]: from='client.? 192.168.123.101:0/895072316' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T07:02:49.249 INFO:teuthology.orchestra.run.vm01.stderr:++ cat vm08-ok.txt 2026-03-10T07:02:49.250 INFO:teuthology.orchestra.run.vm01.stderr:+ HOST_OK='podman (/usr/bin/podman) version 5.8.0 is present 2026-03-10T07:02:49.250 INFO:teuthology.orchestra.run.vm01.stderr:systemctl is present 2026-03-10T07:02:49.250 INFO:teuthology.orchestra.run.vm01.stderr:lvcreate is present 2026-03-10T07:02:49.250 INFO:teuthology.orchestra.run.vm01.stderr:Unit chronyd.service is enabled and running 2026-03-10T07:02:49.250 INFO:teuthology.orchestra.run.vm01.stderr:Hostname "vm08" matches what is expected. 
2026-03-10T07:02:49.250 INFO:teuthology.orchestra.run.vm01.stderr:Host looks OK' 2026-03-10T07:02:49.250 INFO:teuthology.orchestra.run.vm01.stderr:+ grep -q 'Host looks OK' 2026-03-10T07:02:49.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:48 vm08 ceph-mon[51337]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T07:02:49.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:48 vm08 ceph-mon[51337]: pgmap v31: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail 2026-03-10T07:02:49.262 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:48 vm08 ceph-mon[51337]: from='client.? 192.168.123.101:0/895072316' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T07:02:49.292 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T07:02:49.294 INFO:tasks.cephadm:Teardown begin 2026-03-10T07:02:49.295 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:02:49.323 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:02:49.351 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T07:02:49.351 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 -- ceph mgr module disable cephadm 2026-03-10T07:02:49.546 INFO:teuthology.orchestra.run.vm01.stderr:Inferring config /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/mon.a/config 2026-03-10T07:02:49.564 INFO:teuthology.orchestra.run.vm01.stderr:Error: statfs /etc/ceph/ceph.client.admin.keyring: no such file or directory 2026-03-10T07:02:49.582 DEBUG:teuthology.orchestra.run:got remote process result: 125 2026-03-10T07:02:49.583 
INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T07:02:49.583 DEBUG:teuthology.orchestra.run.vm01:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T07:02:49.599 DEBUG:teuthology.orchestra.run.vm08:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T07:02:49.617 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T07:02:49.617 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T07:02:49.617 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.a 2026-03-10T07:02:49.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:49 vm01 systemd[1]: Stopping Ceph mon.a for c470e80c-1c4e-11f1-89aa-7f5873752d90... 2026-03-10T07:02:49.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a[50691]: 2026-03-10T07:02:49.751+0000 7efdf696b640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T07:02:49.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:49 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a[50691]: 2026-03-10T07:02:49.751+0000 7efdf696b640 -1 mon.a@0(leader) e2 *** Got Signal Terminated *** 2026-03-10T07:02:49.907 INFO:journalctl@ceph.mon.a.vm01.stdout:Mar 10 07:02:49 vm01 podman[61699]: 2026-03-10 07:02:49.90780966 +0000 UTC m=+0.171425127 container died 43de93e2c1cd8b77628d3fba5645d4c907d3ca75a9c647eeccef37420ccd4a25 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-a, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, 
OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T07:02:50.003 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.a.service' 2026-03-10T07:02:50.050 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T07:02:50.050 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T07:02:50.050 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-10T07:02:50.051 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.b 2026-03-10T07:02:50.411 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:50 vm08 systemd[1]: Stopping Ceph mon.b for c470e80c-1c4e-11f1-89aa-7f5873752d90... 
2026-03-10T07:02:50.411 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:50 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-b[51333]: 2026-03-10T07:02:50.162+0000 7f02600b4640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T07:02:50.411 INFO:journalctl@ceph.mon.b.vm08.stdout:Mar 10 07:02:50 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mon-b[51333]: 2026-03-10T07:02:50.162+0000 7f02600b4640 -1 mon.b@1(peon) e2 *** Got Signal Terminated *** 2026-03-10T07:02:50.505 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mon.b.service' 2026-03-10T07:02:50.552 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T07:02:50.552 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-10T07:02:50.552 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-10T07:02:50.552 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a 2026-03-10T07:02:50.813 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a.service' 2026-03-10T07:02:50.837 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:02:50 vm01 systemd[1]: Stopping Ceph mgr.a for c470e80c-1c4e-11f1-89aa-7f5873752d90... 
2026-03-10T07:02:50.837 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:02:50 vm01 podman[61800]: 2026-03-10 07:02:50.7253468 +0000 UTC m=+0.067989718 container died 9e1d6e4d2b806b70ac439410075661d43e951ac25f0f4e584d08414cfc6a3fc3 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T07:02:50.838 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:02:50 vm01 podman[61800]: 2026-03-10 07:02:50.745899466 +0000 UTC m=+0.088542384 container remove 9e1d6e4d2b806b70ac439410075661d43e951ac25f0f4e584d08414cfc6a3fc3 (image=quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team ) 
2026-03-10T07:02:50.838 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:02:50 vm01 bash[61800]: ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-mgr-a 2026-03-10T07:02:50.838 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:02:50 vm01 systemd[1]: ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a.service: Deactivated successfully. 2026-03-10T07:02:50.838 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:02:50 vm01 systemd[1]: Stopped Ceph mgr.a for c470e80c-1c4e-11f1-89aa-7f5873752d90. 2026-03-10T07:02:50.838 INFO:journalctl@ceph.mgr.a.vm01.stdout:Mar 10 07:02:50 vm01 systemd[1]: ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.a.service: Consumed 18.369s CPU time. 2026-03-10T07:02:50.853 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T07:02:50.853 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-10T07:02:50.853 INFO:tasks.cephadm.mgr.b:Stopping mgr.b... 2026-03-10T07:02:50.853 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.b 2026-03-10T07:02:51.007 INFO:journalctl@ceph.mgr.b.vm08.stdout:Mar 10 07:02:50 vm08 systemd[1]: Stopping Ceph mgr.b for c470e80c-1c4e-11f1-89aa-7f5873752d90... 2026-03-10T07:02:51.111 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@mgr.b.service' 2026-03-10T07:02:51.150 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T07:02:51.150 INFO:tasks.cephadm.mgr.b:Stopped mgr.b 2026-03-10T07:02:51.150 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T07:02:51.150 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@osd.0 2026-03-10T07:02:51.528 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:51 vm01 systemd[1]: Stopping Ceph osd.0 for c470e80c-1c4e-11f1-89aa-7f5873752d90... 
2026-03-10T07:02:51.528 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0[58371]: 2026-03-10T07:02:51.261+0000 7f7782e5d640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T07:02:51.528 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0[58371]: 2026-03-10T07:02:51.261+0000 7f7782e5d640 -1 osd.0 13 *** Got signal Terminated *** 2026-03-10T07:02:51.528 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:51 vm01 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0[58371]: 2026-03-10T07:02:51.261+0000 7f7782e5d640 -1 osd.0 13 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T07:02:56.580 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[61904]: 2026-03-10 07:02:56.282730212 +0000 UTC m=+5.036668564 container died 65643f6f99551723fbb33a3e34c795e93c1286f4f3d0468176bc8e3af31b41c5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223) 2026-03-10T07:02:56.580 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[61904]: 
2026-03-10 07:02:56.340046208 +0000 UTC m=+5.093984560 container remove 65643f6f99551723fbb33a3e34c795e93c1286f4f3d0468176bc8e3af31b41c5 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/) 2026-03-10T07:02:56.580 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 bash[61904]: ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0 2026-03-10T07:02:56.832 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[62172]: 2026-03-10 07:02:56.606659883 +0000 UTC m=+0.034368609 container create 1973860b8ef66ad7022cce02fd6105689724cfd3cf77d2f9fd562d0fa847a3a0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0-deactivate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, 
org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) 2026-03-10T07:02:56.832 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[62172]: 2026-03-10 07:02:56.657912402 +0000 UTC m=+0.085621137 container init 1973860b8ef66ad7022cce02fd6105689724cfd3cf77d2f9fd562d0fa847a3a0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0-deactivate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20260223, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T07:02:56.832 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[62172]: 2026-03-10 07:02:56.665565742 +0000 UTC m=+0.093274477 container start 1973860b8ef66ad7022cce02fd6105689724cfd3cf77d2f9fd562d0fa847a3a0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0-deactivate, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.schema-version=1.0, 
OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2) 2026-03-10T07:02:56.832 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[62172]: 2026-03-10 07:02:56.667912613 +0000 UTC m=+0.095621348 container attach 1973860b8ef66ad7022cce02fd6105689724cfd3cf77d2f9fd562d0fa847a3a0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, org.label-schema.build-date=20260223) 2026-03-10T07:02:56.832 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[62172]: 2026-03-10 07:02:56.592980046 +0000 UTC m=+0.020688781 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T07:02:56.832 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[62172]: 2026-03-10 07:02:56.810839146 +0000 UTC m=+0.238547881 container died 1973860b8ef66ad7022cce02fd6105689724cfd3cf77d2f9fd562d0fa847a3a0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0-deactivate, 
CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git) 2026-03-10T07:02:56.832 INFO:journalctl@ceph.osd.0.vm01.stdout:Mar 10 07:02:56 vm01 podman[62172]: 2026-03-10 07:02:56.833101562 +0000 UTC m=+0.260810297 container remove 1973860b8ef66ad7022cce02fd6105689724cfd3cf77d2f9fd562d0fa847a3a0 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-0-deactivate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True) 2026-03-10T07:02:56.858 DEBUG:teuthology.orchestra.run.vm01:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@osd.0.service' 2026-03-10T07:02:56.908 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T07:02:56.908 INFO:tasks.cephadm.osd.0:Stopped osd.0 
2026-03-10T07:02:56.908 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-10T07:02:56.909 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@osd.1 2026-03-10T07:02:57.262 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:02:56 vm08 systemd[1]: Stopping Ceph osd.1 for c470e80c-1c4e-11f1-89aa-7f5873752d90... 2026-03-10T07:02:57.262 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:02:57 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1[55475]: 2026-03-10T07:02:57.018+0000 7f7537694640 -1 received signal: Terminated from /run/podman-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false (PID: 1) UID: 0 2026-03-10T07:02:57.262 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:02:57 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1[55475]: 2026-03-10T07:02:57.018+0000 7f7537694640 -1 osd.1 13 *** Got signal Terminated *** 2026-03-10T07:02:57.262 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:02:57 vm08 ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1[55475]: 2026-03-10T07:02:57.018+0000 7f7537694640 -1 osd.1 13 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T07:03:02.366 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:03:02 vm08 podman[57416]: 2026-03-10 07:03:02.050785939 +0000 UTC m=+5.044538947 container died 18d5d3de218134dda31a48ff9a58eeabf16b97ddd59ab052ac435b53f64228c9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team , org.label-schema.vendor=CentOS, ceph=True, 
FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image) 2026-03-10T07:03:02.366 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:03:02 vm08 podman[57416]: 2026-03-10 07:03:02.071450197 +0000 UTC m=+5.065203196 container remove 18d5d3de218134dda31a48ff9a58eeabf16b97ddd59ab052ac435b53f64228c9 (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1, org.opencontainers.image.authors=Ceph Release Team , OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20260223, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, org.label-schema.license=GPLv2, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9) 2026-03-10T07:03:02.366 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:03:02 vm08 bash[57416]: ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1 2026-03-10T07:03:02.366 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:03:02 vm08 podman[57484]: 2026-03-10 07:03:02.201660088 +0000 UTC m=+0.016743412 container create 9c540e7c1bbf953aa93dab36a0ba6a7707ae7fe344d6da2649102a514bc4086d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1-deactivate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team , CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20260223) 2026-03-10T07:03:02.366 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:03:02 vm08 podman[57484]: 2026-03-10 07:03:02.23976866 +0000 UTC m=+0.054851985 container init 9c540e7c1bbf953aa93dab36a0ba6a7707ae7fe344d6da2649102a514bc4086d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1-deactivate, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team , CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, OSD_FLAVOR=default, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T07:03:02.366 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:03:02 vm08 podman[57484]: 2026-03-10 07:03:02.24342388 +0000 UTC m=+0.058507194 container start 9c540e7c1bbf953aa93dab36a0ba6a7707ae7fe344d6da2649102a514bc4086d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1-deactivate, org.opencontainers.image.authors=Ceph Release Team 
, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/) 2026-03-10T07:03:02.366 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:03:02 vm08 podman[57484]: 2026-03-10 07:03:02.244425976 +0000 UTC m=+0.059509300 container attach 9c540e7c1bbf953aa93dab36a0ba6a7707ae7fe344d6da2649102a514bc4086d (image=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, name=ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90-osd-1-deactivate, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team , org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph-ci.git, CEPH_REF=squid, org.label-schema.build-date=20260223, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df) 2026-03-10T07:03:02.366 INFO:journalctl@ceph.osd.1.vm08.stdout:Mar 10 07:03:02 vm08 podman[57484]: 2026-03-10 07:03:02.195019331 +0000 UTC m=+0.010102665 image pull 654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T07:03:02.396 
DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c470e80c-1c4e-11f1-89aa-7f5873752d90@osd.1.service' 2026-03-10T07:03:02.437 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T07:03:02.437 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T07:03:02.437 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 --force --keep-logs 2026-03-10T07:03:02.570 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:03:03.704 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 --force --keep-logs 2026-03-10T07:03:03.841 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:03:04.964 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:03:04.993 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T07:03:05.021 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T07:03:05.021 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938/remote/vm01/crash 2026-03-10T07:03:05.021 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/crash -- . 
2026-03-10T07:03:05.063 INFO:teuthology.orchestra.run.vm01.stderr:tar: /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/crash: Cannot open: No such file or directory 2026-03-10T07:03:05.063 INFO:teuthology.orchestra.run.vm01.stderr:tar: Error is not recoverable: exiting now 2026-03-10T07:03:05.065 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938/remote/vm08/crash 2026-03-10T07:03:05.065 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/crash -- . 2026-03-10T07:03:05.093 INFO:teuthology.orchestra.run.vm08.stderr:tar: /var/lib/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/crash: Cannot open: No such file or directory 2026-03-10T07:03:05.093 INFO:teuthology.orchestra.run.vm08.stderr:tar: Error is not recoverable: exiting now 2026-03-10T07:03:05.094 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T07:03:05.094 DEBUG:teuthology.orchestra.run.vm01:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | head -n 1 2026-03-10T07:03:05.132 INFO:tasks.cephadm:Compressing logs... 
2026-03-10T07:03:05.132 DEBUG:teuthology.orchestra.run.vm01:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T07:03:05.174 DEBUG:teuthology.orchestra.run.vm08:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T07:03:05.201 INFO:teuthology.orchestra.run.vm08.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T07:03:05.201 INFO:teuthology.orchestra.run.vm08.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T07:03:05.203 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-volume.log 2026-03-10T07:03:05.203 INFO:teuthology.orchestra.run.vm08.stderr: 80.0% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T07:03:05.203 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mon.b.log 2026-03-10T07:03:05.203 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.audit.log 2026-03-10T07:03:05.205 INFO:teuthology.orchestra.run.vm01.stderr:find: gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T07:03:05.205 INFO:teuthology.orchestra.run.vm01.stderr:‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T07:03:05.206 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-volume.log: /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mon.b.log: 95.0% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-volume.log.gz 2026-03-10T07:03:05.206 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.log 
2026-03-10T07:03:05.206 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mon.a.log 2026-03-10T07:03:05.207 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.audit.log: 89.8% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.audit.log.gz 2026-03-10T07:03:05.207 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.cephadm.log 2026-03-10T07:03:05.207 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/cephadm.log: 85.5% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T07:03:05.207 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.log 2026-03-10T07:03:05.207 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.log: 79.5% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.log.gz 2026-03-10T07:03:05.208 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mgr.b.log 2026-03-10T07:03:05.208 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.cephadm.log: 75.8% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.cephadm.log.gz 2026-03-10T07:03:05.208 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-osd.1.log 2026-03-10T07:03:05.210 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mgr.b.log: 89.8% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mgr.b.log.gz 2026-03-10T07:03:05.212 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mon.a.log: gzip -5 --verbose -- 
/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mgr.a.log 2026-03-10T07:03:05.212 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.log: 81.7% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.log.gz 2026-03-10T07:03:05.212 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.audit.log 2026-03-10T07:03:05.215 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-osd.1.log: 92.3% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mon.b.log.gz 2026-03-10T07:03:05.224 INFO:teuthology.orchestra.run.vm08.stderr: 93.8% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-osd.1.log.gz 2026-03-10T07:03:05.225 INFO:teuthology.orchestra.run.vm08.stderr: 2026-03-10T07:03:05.225 INFO:teuthology.orchestra.run.vm08.stderr:real 0m0.029s 2026-03-10T07:03:05.225 INFO:teuthology.orchestra.run.vm08.stderr:user 0m0.026s 2026-03-10T07:03:05.225 INFO:teuthology.orchestra.run.vm08.stderr:sys 0m0.021s 2026-03-10T07:03:05.225 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.cephadm.log 2026-03-10T07:03:05.225 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.audit.log: 89.7% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.audit.log.gz 2026-03-10T07:03:05.225 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-volume.log 2026-03-10T07:03:05.225 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.cephadm.log: 79.2% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph.cephadm.log.gz 2026-03-10T07:03:05.229 
INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-osd.0.log 2026-03-10T07:03:05.233 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-volume.log: 94.9% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-volume.log.gz 2026-03-10T07:03:05.237 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-osd.0.log: 88.6% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mgr.a.log.gz 2026-03-10T07:03:05.243 INFO:teuthology.orchestra.run.vm01.stderr: 93.8% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-osd.0.log.gz 2026-03-10T07:03:05.261 INFO:teuthology.orchestra.run.vm01.stderr: 91.6% -- replaced with /var/log/ceph/c470e80c-1c4e-11f1-89aa-7f5873752d90/ceph-mon.a.log.gz 2026-03-10T07:03:05.262 INFO:teuthology.orchestra.run.vm01.stderr: 2026-03-10T07:03:05.262 INFO:teuthology.orchestra.run.vm01.stderr:real 0m0.071s 2026-03-10T07:03:05.262 INFO:teuthology.orchestra.run.vm01.stderr:user 0m0.094s 2026-03-10T07:03:05.262 INFO:teuthology.orchestra.run.vm01.stderr:sys 0m0.018s 2026-03-10T07:03:05.263 INFO:tasks.cephadm:Archiving logs... 2026-03-10T07:03:05.263 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938/remote/vm01/log 2026-03-10T07:03:05.263 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T07:03:05.337 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938/remote/vm08/log 2026-03-10T07:03:05.337 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-10T07:03:05.365 INFO:tasks.cephadm:Removing cluster... 
2026-03-10T07:03:05.365 DEBUG:teuthology.orchestra.run.vm01:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 --force 2026-03-10T07:03:05.507 INFO:teuthology.orchestra.run.vm01.stdout:Deleting cluster with fsid: c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:03:05.737 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c470e80c-1c4e-11f1-89aa-7f5873752d90 --force 2026-03-10T07:03:05.869 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: c470e80c-1c4e-11f1-89aa-7f5873752d90 2026-03-10T07:03:06.095 INFO:tasks.cephadm:Removing cephadm ... 2026-03-10T07:03:06.095 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T07:03:06.114 DEBUG:teuthology.orchestra.run.vm08:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-10T07:03:06.130 INFO:tasks.cephadm:Teardown complete 2026-03-10T07:03:06.149 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-10T07:03:06.151 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-10T07:03:06.151 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T07:03:06.157 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-10T07:03:06.205 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 
2026-03-10T07:03:06.208 DEBUG:teuthology.orchestra.run.vm01:> 2026-03-10T07:03:06.208 DEBUG:teuthology.orchestra.run.vm01:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T07:03:06.208 DEBUG:teuthology.orchestra.run.vm01:> sudo yum -y remove $d || true 2026-03-10T07:03:06.208 DEBUG:teuthology.orchestra.run.vm01:> done 2026-03-10T07:03:06.214 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 2026-03-10T07:03:06.215 DEBUG:teuthology.orchestra.run.vm08:> 2026-03-10T07:03:06.215 DEBUG:teuthology.orchestra.run.vm08:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-10T07:03:06.215 DEBUG:teuthology.orchestra.run.vm08:> sudo yum -y remove $d || true 2026-03-10T07:03:06.215 DEBUG:teuthology.orchestra.run.vm08:> done 2026-03-10T07:03:06.400 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 
2026-03-10T07:03:06.400 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout:Removing: 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies: 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout:Remove 2 Packages 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 39 M 2026-03-10T07:03:06.401 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T07:03:06.403 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T07:03:06.403 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T07:03:06.417 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 2026-03-10T07:03:06.417 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T07:03:06.421 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:Remove 2 Packages
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 39 M
2026-03-10T07:03:06.422 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T07:03:06.424 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T07:03:06.424 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T07:03:06.438 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T07:03:06.438 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T07:03:06.449 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T07:03:06.471 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T07:03:06.471 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:06.472 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:06.472 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T07:03:06.472 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T07:03:06.472 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T07:03:06.472 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:06.476 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:06.484 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:06.495 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:06.495 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:06.495 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-10T07:03:06.495 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target".
2026-03-10T07:03:06.496 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target".
2026-03-10T07:03:06.496 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:06.498 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:06.499 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T07:03:06.507 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:06.523 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T07:03:06.567 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T07:03:06.567 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:06.602 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T07:03:06.603 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:06.623 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T07:03:06.623 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:06.623 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T07:03:06.623 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T07:03:06.623 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:06.623 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T07:03:06.654 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2
2026-03-10T07:03:06.654 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:06.655 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T07:03:06.655 INFO:teuthology.orchestra.run.vm08.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch
2026-03-10T07:03:06.655 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:06.655 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T07:03:06.841 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T07:03:06.841 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:Remove 4 Packages
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 212 M
2026-03-10T07:03:06.842 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T07:03:06.845 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T07:03:06.845 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T07:03:06.862 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:Remove 4 Packages
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 212 M
2026-03-10T07:03:06.864 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T07:03:06.867 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T07:03:06.867 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T07:03:06.871 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T07:03:06.871 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T07:03:06.891 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T07:03:06.891 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T07:03:06.940 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T07:03:06.946 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T07:03:06.949 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T07:03:06.952 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T07:03:06.958 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T07:03:06.963 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T07:03:06.965 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4
2026-03-10T07:03:06.968 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T07:03:06.968 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4
2026-03-10T07:03:06.985 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T07:03:07.038 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T07:03:07.038 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T07:03:07.038 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T07:03:07.039 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T07:03:07.049 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4
2026-03-10T07:03:07.050 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4
2026-03-10T07:03:07.050 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4
2026-03-10T07:03:07.050 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4
2026-03-10T07:03:07.096 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T07:03:07.096 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:07.096 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T07:03:07.096 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T07:03:07.096 INFO:teuthology.orchestra.run.vm01.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T07:03:07.096 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:07.096 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T07:03:07.105 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4
2026-03-10T07:03:07.105 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:07.105 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T07:03:07.105 INFO:teuthology.orchestra.run.vm08.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64
2026-03-10T07:03:07.105 INFO:teuthology.orchestra.run.vm08.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64
2026-03-10T07:03:07.105 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:07.105 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T07:03:07.307 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:Remove 8 Packages
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 28 M
2026-03-10T07:03:07.308 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T07:03:07.311 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T07:03:07.311 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T07:03:07.318 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T07:03:07.319 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:07.319 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T07:03:07.319 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:07.319 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T07:03:07.319 INFO:teuthology.orchestra.run.vm08.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0
2026-03-10T07:03:07.319 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T07:03:07.319 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout: zip x86_64 3.0-35.el9 @baseos 724 k
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout:Remove 8 Packages
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 28 M
2026-03-10T07:03:07.320 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T07:03:07.323 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T07:03:07.323 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T07:03:07.334 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T07:03:07.335 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T07:03:07.349 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T07:03:07.349 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T07:03:07.380 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T07:03:07.386 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T07:03:07.389 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T07:03:07.391 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T07:03:07.392 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T07:03:07.393 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T07:03:07.396 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T07:03:07.397 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T07:03:07.398 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T07:03:07.401 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8
2026-03-10T07:03:07.402 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8
2026-03-10T07:03:07.405 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8
2026-03-10T07:03:07.408 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8
2026-03-10T07:03:07.409 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8
2026-03-10T07:03:07.418 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T07:03:07.419 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:07.419 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T07:03:07.419 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T07:03:07.419 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T07:03:07.419 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:07.419 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T07:03:07.425 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T07:03:07.428 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T07:03:07.428 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:07.428 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-10T07:03:07.428 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target".
2026-03-10T07:03:07.428 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target".
2026-03-10T07:03:07.428 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:07.429 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T07:03:07.437 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8
2026-03-10T07:03:07.444 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T07:03:07.445 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:07.445 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T07:03:07.445 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T07:03:07.445 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T07:03:07.445 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:07.446 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T07:03:07.459 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T07:03:07.459 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:07.459 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-10T07:03:07.459 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target".
2026-03-10T07:03:07.459 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target".
2026-03-10T07:03:07.459 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:07.461 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T07:03:07.539 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T07:03:07.539 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T07:03:07.539 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T07:03:07.539 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T07:03:07.539 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T07:03:07.539 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T07:03:07.539 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T07:03:07.540 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T07:03:07.552 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8
2026-03-10T07:03:07.553 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8
2026-03-10T07:03:07.553 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8
2026-03-10T07:03:07.553 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8
2026-03-10T07:03:07.553 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8
2026-03-10T07:03:07.553 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8
2026-03-10T07:03:07.553 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 6/8
2026-03-10T07:03:07.553 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout: zip-3.0-35.el9.x86_64
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:07.596 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T07:03:07.612 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout: lua-5.4.4-4.el9.x86_64
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout: unzip-6.0-59.el9.x86_64
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout: zip-3.0-35.el9.x86_64
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:07.613 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T07:03:07.823 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout:===========================================================================================
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout:===========================================================================================
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k
2026-03-10T07:03:07.829 INFO:teuthology.orchestra.run.vm01.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k
2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T07:03:07.830 
INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T07:03:07.830 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout:=========================================================================================== 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout:Remove 102 Packages 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 613 M 2026-03-10T07:03:07.831 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T07:03:07.847 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages: 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-10T07:03:07.853 INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused 
dependencies: 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-10T07:03:07.854 
INFO:teuthology.orchestra.run.vm08.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-10T07:03:07.854 
INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-10T07:03:07.854 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 
2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-10T07:03:07.855 
INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout:=========================================================================================== 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout:Remove 102 Packages 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 613 M 2026-03-10T07:03:07.855 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T07:03:07.857 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T07:03:07.858 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T07:03:07.882 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 
2026-03-10T07:03:07.883 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T07:03:07.967 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 2026-03-10T07:03:07.967 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T07:03:07.995 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 2026-03-10T07:03:07.996 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T07:03:08.124 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T07:03:08.124 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T07:03:08.132 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T07:03:08.144 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T07:03:08.144 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T07:03:08.150 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T07:03:08.150 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T07:03:08.150 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T07:03:08.150 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T07:03:08.150 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-10T07:03:08.150 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:08.151 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T07:03:08.170 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/102 2026-03-10T07:03:08.176 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T07:03:08.189 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T07:03:08.189 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T07:03:08.189 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-10T07:03:08.189 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-10T07:03:08.189 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-10T07:03:08.189 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:08.189 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T07:03:08.200 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T07:03:08.200 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T07:03:08.202 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T07:03:08.226 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/102 2026-03-10T07:03:08.226 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T07:03:08.256 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T07:03:08.265 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T07:03:08.269 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T07:03:08.269 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T07:03:08.280 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T07:03:08.282 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/102 2026-03-10T07:03:08.287 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T07:03:08.291 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T07:03:08.291 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : 
python3-kubernetes-1:26.1.0-3.el9.noarch 5/102 2026-03-10T07:03:08.296 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/102 2026-03-10T07:03:08.296 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T07:03:08.299 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T07:03:08.303 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T07:03:08.308 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T07:03:08.315 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/102 2026-03-10T07:03:08.319 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/102 2026-03-10T07:03:08.323 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T07:03:08.323 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T07:03:08.323 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T07:03:08.323 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T07:03:08.323 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 
2026-03-10T07:03:08.323 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:08.328 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T07:03:08.328 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/102 2026-03-10T07:03:08.332 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/102 2026-03-10T07:03:08.336 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T07:03:08.352 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102 2026-03-10T07:03:08.352 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T07:03:08.352 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-10T07:03:08.352 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:08.357 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T07:03:08.357 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-10T07:03:08.357 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-10T07:03:08.357 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-10T07:03:08.357 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 
2026-03-10T07:03:08.357 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:08.360 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T07:03:08.364 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T07:03:08.370 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T07:03:08.372 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102
2026-03-10T07:03:08.374 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T07:03:08.377 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102
2026-03-10T07:03:08.382 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102
2026-03-10T07:03:08.391 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102
2026-03-10T07:03:08.392 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T07:03:08.392 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:08.392 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-10T07:03:08.392 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:08.401 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T07:03:08.403 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102
2026-03-10T07:03:08.410 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102
2026-03-10T07:03:08.412 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/102
2026-03-10T07:03:08.415 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/102
2026-03-10T07:03:08.420 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/102
2026-03-10T07:03:08.420 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102
2026-03-10T07:03:08.425 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/102
2026-03-10T07:03:08.427 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102
2026-03-10T07:03:08.434 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/102
2026-03-10T07:03:08.447 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/102
2026-03-10T07:03:08.453 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/102
2026-03-10T07:03:08.457 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T07:03:08.463 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/102
2026-03-10T07:03:08.465 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T07:03:08.468 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T07:03:08.470 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/102
2026-03-10T07:03:08.478 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T07:03:08.502 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/102
2026-03-10T07:03:08.513 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/102
2026-03-10T07:03:08.516 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/102
2026-03-10T07:03:08.520 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T07:03:08.520 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T07:03:08.525 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/102
2026-03-10T07:03:08.528 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T07:03:08.537 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/102
2026-03-10T07:03:08.537 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T07:03:08.543 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/102
2026-03-10T07:03:08.631 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T07:03:08.642 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/102
2026-03-10T07:03:08.647 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T07:03:08.658 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/102
2026-03-10T07:03:08.663 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T07:03:08.663 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T07:03:08.663 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:08.664 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T07:03:08.674 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T07:03:08.674 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service".
2026-03-10T07:03:08.674 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:08.675 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T07:03:08.693 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T07:03:08.703 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/102
2026-03-10T07:03:08.709 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T07:03:08.715 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T07:03:08.718 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T07:03:08.720 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/102
2026-03-10T07:03:08.720 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T07:03:08.726 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/102
2026-03-10T07:03:08.729 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/102
2026-03-10T07:03:08.732 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/102
2026-03-10T07:03:08.740 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T07:03:08.741 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:08.741 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T07:03:08.741 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T07:03:08.741 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T07:03:08.741 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:08.742 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T07:03:08.756 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T07:03:08.758 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T07:03:08.758 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:08.758 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-10T07:03:08.758 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target".
2026-03-10T07:03:08.758 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target".
2026-03-10T07:03:08.758 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:08.760 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T07:03:08.760 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T07:03:08.763 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T07:03:08.766 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T07:03:08.769 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T07:03:08.773 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T07:03:08.778 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T07:03:08.778 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/102
2026-03-10T07:03:08.783 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/102
2026-03-10T07:03:08.783 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T07:03:08.785 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/102
2026-03-10T07:03:08.787 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 38/102
2026-03-10T07:03:08.789 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 39/102
2026-03-10T07:03:08.792 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 40/102
2026-03-10T07:03:08.797 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 41/102
2026-03-10T07:03:08.801 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 42/102
2026-03-10T07:03:08.839 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T07:03:08.852 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T07:03:08.852 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 43/102
2026-03-10T07:03:08.855 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T07:03:08.859 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T07:03:08.862 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T07:03:08.863 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 44/102
2026-03-10T07:03:08.866 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T07:03:08.866 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 45/102
2026-03-10T07:03:08.869 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T07:03:08.870 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 46/102
2026-03-10T07:03:08.872 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 47/102
2026-03-10T07:03:08.876 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 48/102
2026-03-10T07:03:08.878 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 49/102
2026-03-10T07:03:08.890 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T07:03:08.890 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:08.890 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T07:03:08.890 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:08.890 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T07:03:08.899 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T07:03:08.901 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T07:03:08.903 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T07:03:08.904 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T07:03:08.905 INFO:teuthology.orchestra.run.vm08.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-10T07:03:08.905 INFO:teuthology.orchestra.run.vm08.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-10T07:03:08.905 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:08.905 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T07:03:08.906 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T07:03:08.909 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T07:03:08.911 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T07:03:08.914 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T07:03:08.915 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 50/102
2026-03-10T07:03:08.917 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T07:03:08.917 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 51/102
2026-03-10T07:03:08.919 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 52/102
2026-03-10T07:03:08.920 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T07:03:08.922 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ply-3.11-14.el9.noarch 53/102
2026-03-10T07:03:08.924 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 54/102
2026-03-10T07:03:08.926 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 55/102
2026-03-10T07:03:08.928 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T07:03:08.929 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 56/102
2026-03-10T07:03:08.932 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 57/102
2026-03-10T07:03:08.933 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T07:03:08.935 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T07:03:08.935 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 58/102
2026-03-10T07:03:08.938 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T07:03:08.940 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T07:03:08.943 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 59/102
2026-03-10T07:03:08.946 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T07:03:08.947 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 60/102
2026-03-10T07:03:08.949 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 61/102
2026-03-10T07:03:08.950 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T07:03:08.952 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 62/102
2026-03-10T07:03:08.955 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 63/102
2026-03-10T07:03:08.956 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T07:03:08.960 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 64/102
2026-03-10T07:03:08.960 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T07:03:08.964 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 65/102
2026-03-10T07:03:08.966 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T07:03:08.969 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 66/102
2026-03-10T07:03:08.970 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T07:03:08.973 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 67/102
2026-03-10T07:03:08.973 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T07:03:08.976 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T07:03:08.979 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 68/102
2026-03-10T07:03:08.981 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T07:03:08.982 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 69/102
2026-03-10T07:03:08.985 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T07:03:08.986 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 70/102
2026-03-10T07:03:08.989 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T07:03:08.989 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 71/102
2026-03-10T07:03:08.995 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 72/102
2026-03-10T07:03:08.997 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T07:03:09.000 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-protobuf-3.14.0-17.el9.noarch 73/102
2026-03-10T07:03:09.002 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T07:03:09.003 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 74/102
2026-03-10T07:03:09.006 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T07:03:09.009 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T07:03:09.010 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T07:03:09.012 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 75/102
2026-03-10T07:03:09.017 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T07:03:09.019 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 76/102
2026-03-10T07:03:09.021 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T07:03:09.022 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 77/102
2026-03-10T07:03:09.025 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 78/102
2026-03-10T07:03:09.027 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 79/102
2026-03-10T07:03:09.033 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 80/102
2026-03-10T07:03:09.037 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 81/102
2026-03-10T07:03:09.042 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T07:03:09.042 INFO:teuthology.orchestra.run.vm01.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T07:03:09.042 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:09.050 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T07:03:09.058 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T07:03:09.058 INFO:teuthology.orchestra.run.vm08.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service".
2026-03-10T07:03:09.058 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:09.066 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T07:03:09.081 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T07:03:09.081 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T07:03:09.095 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T07:03:09.096 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 82/102
2026-03-10T07:03:09.096 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T07:03:09.100 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T07:03:09.104 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T07:03:09.105 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T07:03:09.105 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T07:03:09.110 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 83/102
2026-03-10T07:03:09.115 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 84/102
2026-03-10T07:03:09.118 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 85/102
2026-03-10T07:03:09.120 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 86/102
2026-03-10T07:03:09.120 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /sys
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /proc
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /mnt
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /var/tmp
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /home
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /root
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout:skipping the directory /tmp
2026-03-10T07:03:14.937 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:14.946 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T07:03:14.967 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T07:03:14.967 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T07:03:14.978 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T07:03:14.981 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T07:03:14.984 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T07:03:14.987 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T07:03:14.989 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T07:03:14.989 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T07:03:15.005 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T07:03:15.007 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102
2026-03-10T07:03:15.009 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 87/102
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /sys
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /proc
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /mnt
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /var/tmp
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /home
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /root
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /tmp
2026-03-10T07:03:15.011 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:15.012 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102
2026-03-10T07:03:15.015 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102
2026-03-10T07:03:15.019 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 88/102
2026-03-10T07:03:15.021 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102
2026-03-10T07:03:15.029 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102
2026-03-10T07:03:15.035 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T07:03:15.035 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T07:03:15.035 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102
2026-03-10T07:03:15.035 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T07:03:15.041 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 89/102
2026-03-10T07:03:15.043 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 90/102
2026-03-10T07:03:15.046 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 91/102
2026-03-10T07:03:15.048 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 92/102
2026-03-10T07:03:15.050 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 93/102
2026-03-10T07:03:15.050 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T07:03:15.063 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 94/102
2026-03-10T07:03:15.064 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 95/102
2026-03-10T07:03:15.067 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 96/102
2026-03-10T07:03:15.070 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 97/102
2026-03-10T07:03:15.072 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 98/102
2026-03-10T07:03:15.077 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 99/102
2026-03-10T07:03:15.085 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 100/102
2026-03-10T07:03:15.089 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : abseil-cpp-20211102.0-4.el9.x86_64 101/102
2026-03-10T07:03:15.089 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102
2026-03-10T07:03:15.136 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102
2026-03-10T07:03:15.137 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102
2026-03-10T07:03:15.137 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102
2026-03-10T07:03:15.139 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102
2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T07:03:15.140 
INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: 
Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T07:03:15.140 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T07:03:15.141 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T07:03:15.188 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/102 2026-03-10T07:03:15.189 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying 
: gperftools-libs-2.9.1-3.el9.x86_64 20/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 29/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/102 2026-03-10T07:03:15.189 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/102 2026-03-10T07:03:15.190 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/102 2026-03-10T07:03:15.190 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/102 2026-03-10T07:03:15.190 
INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/102 2026-03-10T07:03:15.190 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 83/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 84/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: 
Verifying : python3-repoze-lru-0.7-16.el9.noarch 85/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 86/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 87/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 88/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 89/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 90/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 91/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 92/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 93/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 94/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 95/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 96/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 97/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 98/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 99/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 100/102 2026-03-10T07:03:15.191 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : 
qatzip-libs-1.3.1-1.el9.x86_64 101/102 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.221 
INFO:teuthology.orchestra.run.vm08.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T07:03:15.221 INFO:teuthology.orchestra.run.vm08.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: 
protobuf-3.14.0-17.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-idna-2.10-7.el9.1.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T07:03:15.222 
INFO:teuthology.orchestra.run.vm08.stdout: python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T07:03:15.222 INFO:teuthology.orchestra.run.vm08.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T07:03:15.223 
INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:15.223 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 102/102 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout:Removed: 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: 
ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-10T07:03:15.268 INFO:teuthology.orchestra.run.vm01.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-10T07:03:15.269 
INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-chardet-4.0.0-5.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-idna-2.10-7.el9.1.noarch 
2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpatch-1.21-16.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-jsonpointer-2.0-4.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: 
python3-oauthlib-3.1.1-5.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply-3.11-14.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable-0.7.2-27.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-pysocks-1.7.1-12.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-pytz-2021.1-5.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-10T07:03:15.269 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora-5.0.0-2.el9.noarch 
2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:15.270 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:15.442 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout:Remove 1 Package 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 775 k 2026-03-10T07:03:15.443 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T07:03:15.445 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T07:03:15.445 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T07:03:15.446 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 2026-03-10T07:03:15.447 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T07:03:15.464 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T07:03:15.464 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T07:03:15.488 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 
2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Removing: 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Remove 1 Package 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 775 k 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 
2026-03-10T07:03:15.494 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T07:03:15.510 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T07:03:15.510 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T07:03:15.602 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T07:03:15.624 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T07:03:15.645 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T07:03:15.645 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:15.645 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T07:03:15.645 INFO:teuthology.orchestra.run.vm08.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.645 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:15.645 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:15.665 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1 2026-03-10T07:03:15.665 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:15.665 INFO:teuthology.orchestra.run.vm01.stdout:Removed: 2026-03-10T07:03:15.665 INFO:teuthology.orchestra.run.vm01.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-10T07:03:15.665 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:15.665 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:15.836 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-immutable-object-cache 2026-03-10T07:03:15.836 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 
2026-03-10T07:03:15.839 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-immutable-object-cache 2026-03-10T07:03:15.839 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:15.840 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:15.840 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:15.841 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:15.842 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:15.843 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:15.843 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:16.010 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr 2026-03-10T07:03:16.011 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:16.013 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:16.014 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:16.014 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:16.015 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr 2026-03-10T07:03:16.016 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:16.019 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:16.019 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:16.019 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:16.186 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-dashboard 2026-03-10T07:03:16.186 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:16.189 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:16.189 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 
2026-03-10T07:03:16.189 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:16.202 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-dashboard 2026-03-10T07:03:16.202 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:16.205 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:16.206 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:16.206 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:16.373 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-10T07:03:16.373 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:16.376 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:16.377 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:16.377 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:16.378 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-diskprediction-local 2026-03-10T07:03:16.378 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:16.382 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:16.382 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:16.382 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:16.558 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-rook 2026-03-10T07:03:16.558 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:16.561 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:16.561 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:16.561 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T07:03:16.562 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-rook 2026-03-10T07:03:16.562 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:16.565 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:16.566 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:16.566 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:16.724 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-mgr-cephadm 2026-03-10T07:03:16.724 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:16.727 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:16.727 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:16.727 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:16.754 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-mgr-cephadm 2026-03-10T07:03:16.754 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:16.757 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:16.758 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:16.758 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:16.896 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 
2026-03-10T07:03:16.896 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:16.896 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout:Remove 1 Package 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 3.6 M 2026-03-10T07:03:16.897 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T07:03:16.898 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T07:03:16.898 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T07:03:16.908 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 2026-03-10T07:03:16.908 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T07:03:16.932 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T07:03:16.946 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T07:03:16.952 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 
2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout:Removing: 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout:Remove 1 Package 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 3.6 M 2026-03-10T07:03:16.953 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T07:03:16.955 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T07:03:16.955 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T07:03:16.964 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 
2026-03-10T07:03:16.964 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-10T07:03:16.995 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-10T07:03:17.008 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T07:03:17.010 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T07:03:17.047 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T07:03:17.048 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:17.048 INFO:teuthology.orchestra.run.vm08.stdout:Removed: 2026-03-10T07:03:17.048 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:17.048 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:17.048 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:17.077 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T07:03:17.125 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1 2026-03-10T07:03:17.125 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:17.125 INFO:teuthology.orchestra.run.vm01.stdout:Removed: 2026-03-10T07:03:17.125 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-10T07:03:17.125 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:17.125 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:17.218 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: ceph-volume 2026-03-10T07:03:17.218 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:17.221 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:17.221 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 
2026-03-10T07:03:17.221 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:17.328 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: ceph-volume 2026-03-10T07:03:17.328 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:17.331 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:17.332 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:17.332 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:17.404 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:17.404 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:17.404 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repo Size 2026-03-10T07:03:17.404 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:17.404 INFO:teuthology.orchestra.run.vm08.stdout:Removing: 2026-03-10T07:03:17.404 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-10T07:03:17.404 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages: 2026-03-10T07:03:17.405 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-10T07:03:17.405 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:17.405 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary 2026-03-10T07:03:17.405 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================ 2026-03-10T07:03:17.405 INFO:teuthology.orchestra.run.vm08.stdout:Remove 2 Packages 2026-03-10T07:03:17.405 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T07:03:17.405 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 610 k 2026-03-10T07:03:17.405 
INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check 2026-03-10T07:03:17.407 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded. 2026-03-10T07:03:17.407 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test 2026-03-10T07:03:17.417 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded. 2026-03-10T07:03:17.417 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction 2026-03-10T07:03:17.443 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1 2026-03-10T07:03:17.445 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T07:03:17.459 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T07:03:17.527 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2 2026-03-10T07:03:17.527 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-10T07:03:17.532 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 
2026-03-10T07:03:17.532 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:17.532 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repo Size 2026-03-10T07:03:17.532 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:17.532 INFO:teuthology.orchestra.run.vm01.stdout:Removing: 2026-03-10T07:03:17.532 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k 2026-03-10T07:03:17.532 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages: 2026-03-10T07:03:17.532 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k 2026-03-10T07:03:17.533 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:17.533 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary 2026-03-10T07:03:17.533 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================ 2026-03-10T07:03:17.533 INFO:teuthology.orchestra.run.vm01.stdout:Remove 2 Packages 2026-03-10T07:03:17.533 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-10T07:03:17.533 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 610 k 2026-03-10T07:03:17.533 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check 2026-03-10T07:03:17.534 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded. 2026-03-10T07:03:17.535 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test 2026-03-10T07:03:17.545 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded. 
2026-03-10T07:03:17.545 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T07:03:17.571 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T07:03:17.573 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:17.576 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T07:03:17.576 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:17.576 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T07:03:17.576 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:17.576 INFO:teuthology.orchestra.run.vm08.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:17.576 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:17.576 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T07:03:17.587 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T07:03:17.654 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T07:03:17.654 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-10T07:03:17.711 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-10T07:03:17.711 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:17.711 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T07:03:17.711 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:17.711 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:17.711 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:17.712 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repo Size
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T07:03:17.767 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:17.768 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T07:03:17.768 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:17.768 INFO:teuthology.orchestra.run.vm08.stdout:Remove 3 Packages
2026-03-10T07:03:17.768 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:17.768 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 3.7 M
2026-03-10T07:03:17.768 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T07:03:17.769 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T07:03:17.769 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T07:03:17.786 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T07:03:17.786 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T07:03:17.866 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T07:03:17.868 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T07:03:17.869 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T07:03:17.869 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repo Size
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:Remove 3 Packages
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 3.7 M
2026-03-10T07:03:17.925 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T07:03:17.927 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T07:03:17.927 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T07:03:17.937 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T07:03:17.937 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T07:03:17.937 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T07:03:17.943 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T07:03:17.943 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T07:03:17.974 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T07:03:17.976 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T07:03:17.978 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T07:03:17.978 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T07:03:18.040 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T07:03:18.040 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-10T07:03:18.040 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-10T07:03:18.087 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-10T07:03:18.087 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:18.087 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T07:03:18.087 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.087 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.087 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.087 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:18.087 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T07:03:18.171 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: libcephfs-devel
2026-03-10T07:03:18.172 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T07:03:18.175 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T07:03:18.175 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T07:03:18.175 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T07:03:18.274 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: libcephfs-devel
2026-03-10T07:03:18.275 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T07:03:18.278 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T07:03:18.279 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T07:03:18.279 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T07:03:18.378 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: Package Arch Version Repository Size
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:Removing:
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:Removing dependent packages:
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:Removing unused dependencies:
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:Transaction Summary
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:================================================================================
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:Remove 20 Packages
2026-03-10T07:03:18.380 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:18.381 INFO:teuthology.orchestra.run.vm08.stdout:Freed space: 79 M
2026-03-10T07:03:18.381 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction check
2026-03-10T07:03:18.385 INFO:teuthology.orchestra.run.vm08.stdout:Transaction check succeeded.
2026-03-10T07:03:18.385 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction test
2026-03-10T07:03:18.408 INFO:teuthology.orchestra.run.vm08.stdout:Transaction test succeeded.
2026-03-10T07:03:18.408 INFO:teuthology.orchestra.run.vm08.stdout:Running transaction
2026-03-10T07:03:18.451 INFO:teuthology.orchestra.run.vm08.stdout: Preparing : 1/1
2026-03-10T07:03:18.454 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T07:03:18.457 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T07:03:18.460 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T07:03:18.460 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T07:03:18.473 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T07:03:18.476 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T07:03:18.477 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T07:03:18.479 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T07:03:18.480 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T07:03:18.483 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T07:03:18.483 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T07:03:18.487 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T07:03:18.488 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:Removing:
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:Removing dependent packages:
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:Removing unused dependencies:
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:================================================================================
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:Remove 20 Packages
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:Freed space: 79 M
2026-03-10T07:03:18.489 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-10T07:03:18.493 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-10T07:03:18.493 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-10T07:03:18.497 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T07:03:18.497 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T07:03:18.497 INFO:teuthology.orchestra.run.vm08.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T07:03:18.497 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:18.512 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T07:03:18.514 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-10T07:03:18.514 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T07:03:18.514 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction
2026-03-10T07:03:18.518 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T07:03:18.521 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T07:03:18.524 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T07:03:18.527 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T07:03:18.529 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T07:03:18.531 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T07:03:18.533 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T07:03:18.548 INFO:teuthology.orchestra.run.vm08.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T07:03:18.556 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1
2026-03-10T07:03:18.559 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-10T07:03:18.562 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-10T07:03:18.565 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-10T07:03:18.565 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T07:03:18.580 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-10T07:03:18.582 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-10T07:03:18.584 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-10T07:03:18.585 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T07:03:18.587 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-10T07:03:18.590 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-10T07:03:18.590 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T07:03:18.607 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T07:03:18.607 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T07:03:18.607 INFO:teuthology.orchestra.run.vm01.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-10T07:03:18.607 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:18.618 INFO:teuthology.orchestra.run.vm08.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T07:03:18.618 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T07:03:18.618 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T07:03:18.618 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T07:03:18.619 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T07:03:18.622 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-10T07:03:18.625 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-10T07:03:18.630 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-10T07:03:18.635 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-10T07:03:18.638 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-10T07:03:18.641 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-10T07:03:18.643 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-10T07:03:18.645 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-10T07:03:18.648 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-10T07:03:18.662 INFO:teuthology.orchestra.run.vm01.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T07:03:18.666 INFO:teuthology.orchestra.run.vm08.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout:Removed:
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T07:03:18.667 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-10T07:03:18.727 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout:Removed:
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: re2-1:20211101-20.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-10T07:03:18.772 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T07:03:18.893 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: librbd1
2026-03-10T07:03:18.893 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal.
2026-03-10T07:03:18.895 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved.
2026-03-10T07:03:18.896 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do.
2026-03-10T07:03:18.896 INFO:teuthology.orchestra.run.vm08.stdout:Complete!
2026-03-10T07:03:18.985 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: librbd1
2026-03-10T07:03:18.985 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal.
2026-03-10T07:03:18.987 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-10T07:03:18.988 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do.
2026-03-10T07:03:18.988 INFO:teuthology.orchestra.run.vm01.stdout:Complete!
2026-03-10T07:03:19.081 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rados 2026-03-10T07:03:19.081 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:19.084 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:19.084 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:19.084 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:19.174 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rados 2026-03-10T07:03:19.174 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:19.176 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:19.177 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:19.177 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:19.267 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rgw 2026-03-10T07:03:19.267 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:19.269 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:19.269 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:19.269 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:19.336 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rgw 2026-03-10T07:03:19.336 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:19.339 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:19.339 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:19.339 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:19.453 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-cephfs 2026-03-10T07:03:19.453 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 
2026-03-10T07:03:19.456 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:19.456 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:19.456 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:19.499 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-cephfs 2026-03-10T07:03:19.500 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:19.501 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:19.502 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:19.502 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:19.639 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: python3-rbd 2026-03-10T07:03:19.639 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:19.642 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:19.642 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:19.643 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:19.684 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: python3-rbd 2026-03-10T07:03:19.685 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:19.687 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:19.687 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:19.687 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:19.835 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-fuse 2026-03-10T07:03:19.835 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:19.838 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:19.838 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 
2026-03-10T07:03:19.838 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:19.872 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-fuse 2026-03-10T07:03:19.872 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:19.874 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:19.875 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:19.875 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:20.026 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-mirror 2026-03-10T07:03:20.026 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:20.029 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:20.029 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:20.029 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 2026-03-10T07:03:20.061 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-mirror 2026-03-10T07:03:20.061 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:20.063 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:20.064 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:20.064 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:20.221 INFO:teuthology.orchestra.run.vm08.stdout:No match for argument: rbd-nbd 2026-03-10T07:03:20.221 INFO:teuthology.orchestra.run.vm08.stderr:No packages marked for removal. 2026-03-10T07:03:20.223 INFO:teuthology.orchestra.run.vm08.stdout:Dependencies resolved. 2026-03-10T07:03:20.224 INFO:teuthology.orchestra.run.vm08.stdout:Nothing to do. 2026-03-10T07:03:20.224 INFO:teuthology.orchestra.run.vm08.stdout:Complete! 
2026-03-10T07:03:20.245 INFO:teuthology.orchestra.run.vm01.stdout:No match for argument: rbd-nbd 2026-03-10T07:03:20.245 INFO:teuthology.orchestra.run.vm01.stderr:No packages marked for removal. 2026-03-10T07:03:20.247 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved. 2026-03-10T07:03:20.248 INFO:teuthology.orchestra.run.vm01.stdout:Nothing to do. 2026-03-10T07:03:20.248 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 2026-03-10T07:03:20.251 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean all 2026-03-10T07:03:20.268 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean all 2026-03-10T07:03:20.389 INFO:teuthology.orchestra.run.vm08.stdout:56 files removed 2026-03-10T07:03:20.412 INFO:teuthology.orchestra.run.vm01.stdout:56 files removed 2026-03-10T07:03:20.420 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-10T07:03:20.442 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-10T07:03:20.450 DEBUG:teuthology.orchestra.run.vm08:> sudo yum clean expire-cache 2026-03-10T07:03:20.468 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean expire-cache 2026-03-10T07:03:20.616 INFO:teuthology.orchestra.run.vm08.stdout:Cache was expired 2026-03-10T07:03:20.616 INFO:teuthology.orchestra.run.vm08.stdout:0 files removed 2026-03-10T07:03:20.639 INFO:teuthology.orchestra.run.vm01.stdout:Cache was expired 2026-03-10T07:03:20.639 INFO:teuthology.orchestra.run.vm01.stdout:0 files removed 2026-03-10T07:03:20.641 DEBUG:teuthology.parallel:result is None 2026-03-10T07:03:20.662 DEBUG:teuthology.parallel:result is None 2026-03-10T07:03:20.662 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm01.local 2026-03-10T07:03:20.662 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm08.local 2026-03-10T07:03:20.663 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f /etc/yum.repos.d/ceph.repo 2026-03-10T07:03:20.663 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f 
/etc/yum.repos.d/ceph.repo 2026-03-10T07:03:20.689 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-10T07:03:20.690 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf 2026-03-10T07:03:20.755 DEBUG:teuthology.parallel:result is None 2026-03-10T07:03:20.759 DEBUG:teuthology.parallel:result is None 2026-03-10T07:03:20.759 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-10T07:03:20.761 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-10T07:03:20.762 DEBUG:teuthology.orchestra.run.vm01:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T07:03:20.802 DEBUG:teuthology.orchestra.run.vm08:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-10T07:03:20.814 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found 2026-03-10T07:03:20.816 INFO:teuthology.orchestra.run.vm08.stderr:bash: line 1: ntpq: command not found 2026-03-10T07:03:20.915 INFO:teuthology.orchestra.run.vm08.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample 2026-03-10T07:03:20.915 INFO:teuthology.orchestra.run.vm08.stdout:=============================================================================== 2026-03-10T07:03:20.915 INFO:teuthology.orchestra.run.vm08.stdout:^+ vps-fra1.orleans.ddnss.de 2 6 177 125 +140us[ +36us] +/- 12ms 2026-03-10T07:03:20.915 INFO:teuthology.orchestra.run.vm08.stdout:^* time.cloudflare.com 3 6 177 59 -959us[-1122us] +/- 15ms 2026-03-10T07:03:20.915 INFO:teuthology.orchestra.run.vm08.stdout:^+ 141.84.43.75 2 6 177 60 +316us[ +152us] +/- 31ms 2026-03-10T07:03:20.915 INFO:teuthology.orchestra.run.vm08.stdout:^- stratum2-2.NTP.TechFak.N> 2 6 36 255 +1439us[+1338us] +/- 18ms 2026-03-10T07:03:20.916 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach 
LastRx Last sample 2026-03-10T07:03:20.916 INFO:teuthology.orchestra.run.vm01.stdout:=============================================================================== 2026-03-10T07:03:20.916 INFO:teuthology.orchestra.run.vm01.stdout:^- 141.84.43.75 2 6 177 60 +2253us[+2253us] +/- 30ms 2026-03-10T07:03:20.916 INFO:teuthology.orchestra.run.vm01.stdout:^- stratum2-2.NTP.TechFak.N> 2 6 16 256 +1525us[+1011us] +/- 18ms 2026-03-10T07:03:20.916 INFO:teuthology.orchestra.run.vm01.stdout:^* vps-fra1.orleans.ddnss.de 2 6 177 60 +604us[ +543us] +/- 12ms 2026-03-10T07:03:20.916 INFO:teuthology.orchestra.run.vm01.stdout:^+ time.cloudflare.com 3 6 177 60 -597us[ -597us] +/- 15ms 2026-03-10T07:03:20.916 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-10T07:03:20.919 INFO:teuthology.task.ansible:Skipping ansible cleanup... 2026-03-10T07:03:20.919 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-10T07:03:20.921 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-10T07:03:20.923 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-10T07:03:20.925 INFO:teuthology.task.internal:Duration was 360.123241 seconds 2026-03-10T07:03:20.925 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-10T07:03:20.927 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-10T07:03:20.927 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T07:03:20.958 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-10T07:03:20.993 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T07:03:20.997 INFO:teuthology.orchestra.run.vm08.stderr:Redirecting to /bin/systemctl restart rsyslog.service 2026-03-10T07:03:21.290 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
2026-03-10T07:03:21.290 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm01.local 2026-03-10T07:03:21.291 DEBUG:teuthology.orchestra.run.vm01:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T07:03:21.320 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm08.local 2026-03-10T07:03:21.320 DEBUG:teuthology.orchestra.run.vm08:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root 
filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-10T07:03:21.354 INFO:teuthology.task.internal.syslog:Gathering journactl... 2026-03-10T07:03:21.354 DEBUG:teuthology.orchestra.run.vm01:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T07:03:21.362 DEBUG:teuthology.orchestra.run.vm08:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T07:03:21.720 INFO:teuthology.task.internal.syslog:Compressing syslogs... 2026-03-10T07:03:21.721 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T07:03:21.722 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T07:03:21.748 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T07:03:21.748 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T07:03:21.749 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-10T07:03:21.749 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T07:03:21.749 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-10T07:03:21.750 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- 
/home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-10T07:03:21.750 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-10T07:03:21.750 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-10T07:03:21.750 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-10T07:03:21.750 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-10T07:03:21.861 INFO:teuthology.orchestra.run.vm08.stderr: 98.3% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-10T07:03:21.869 INFO:teuthology.orchestra.run.vm01.stderr: 98.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-10T07:03:21.871 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-10T07:03:21.874 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
2026-03-10T07:03:21.874 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-10T07:03:21.938 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-10T07:03:21.963 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-10T07:03:21.965 DEBUG:teuthology.orchestra.run.vm01:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-10T07:03:21.981 DEBUG:teuthology.orchestra.run.vm08:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-10T07:03:22.009 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = core 2026-03-10T07:03:22.028 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = core 2026-03-10T07:03:22.043 DEBUG:teuthology.orchestra.run.vm01:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-10T07:03:22.083 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T07:03:22.083 DEBUG:teuthology.orchestra.run.vm08:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-10T07:03:22.097 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T07:03:22.097 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-10T07:03:22.100 INFO:teuthology.task.internal:Transferring archived files... 
2026-03-10T07:03:22.100 DEBUG:teuthology.misc:Transferring archived files from vm01:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938/remote/vm01 2026-03-10T07:03:22.100 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-10T07:03:22.158 DEBUG:teuthology.misc:Transferring archived files from vm08:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/938/remote/vm08 2026-03-10T07:03:22.158 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-10T07:03:22.184 INFO:teuthology.task.internal:Removing archive directory... 2026-03-10T07:03:22.184 DEBUG:teuthology.orchestra.run.vm01:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-10T07:03:22.201 DEBUG:teuthology.orchestra.run.vm08:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-10T07:03:22.238 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-10T07:03:22.241 INFO:teuthology.task.internal:Not uploading archives. 2026-03-10T07:03:22.241 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-10T07:03:22.243 INFO:teuthology.task.internal:Tidying up after the test... 
2026-03-10T07:03:22.243 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-10T07:03:22.258 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-10T07:03:22.272 INFO:teuthology.orchestra.run.vm01.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 07:03 /home/ubuntu/cephtest 2026-03-10T07:03:22.293 INFO:teuthology.orchestra.run.vm08.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 10 07:03 /home/ubuntu/cephtest 2026-03-10T07:03:22.294 DEBUG:teuthology.run_tasks:Unwinding manager console_log 2026-03-10T07:03:22.300 INFO:teuthology.run:Summary data: description: orch/cephadm/workunits/{0-distro/centos_9.stream agent/on mon_election/connectivity task/test_ca_signed_key} duration: 360.12324118614197 flavor: default owner: kyr success: true 2026-03-10T07:03:22.300 DEBUG:teuthology.report:Pushing job info to http://localhost:8080 2026-03-10T07:03:22.318 INFO:teuthology.run:pass